SharePoint Lists into Microsoft Fabric Part 3! (ENG)

Published on April 15, 2026 at 10:22 AM

FabCon Atlanta 2026 was packed with tons of new announcements around SQL and Fabric. A lot of exciting ones! Some fairly standard and logical. This week, I tested a small new preview feature that, in my opinion, is actually a giant step forward!

 

We now have a SharePoint List Mirror! Whoesaa!

 

After blogging about the Graph API in a notebook (Integrate SharePoint lists instead of using Onelake (ENG) | youranalyticalbridge), and later adopting the copy activity in Data Factory (A SharePoint file shortcut and SharePoint list copy activity! But does it work? (ENG) | youranalyticalbridge), could we now have found the best solution for my challenge?! Let’s dive into it!

 

Mirrored SharePoint List

 

The SharePoint List Mirror has been in preview since March 2026; it was not announced as a headline feature at the keynote. But sometimes the smallest changes have the biggest effect! Let’s dive in.

In your Microsoft Fabric workspace, create a new item to set up the connection to the mirrored SharePoint List.

 

  • Configure a connection, or choose one that already exists.
  • Once loaded, you can choose your list(s).

 

Easy and done.

 

Let’s check the lists in the mirrored database; all information is live, so let’s dive in.

It only takes seconds to refresh! And now we can use the data in our solution.

This is even faster than using the copy activity last time.

All data from the three lists is available in the mirrored database, and I can query it via the SQL endpoint.

The next step is to get my data into my lakehouse, as I did before. In previous blogs I loaded the data directly into my lh-raw lakehouse, which serves as the bronze layer of the architecture.

 

The question now becomes: how fast is the total pipeline compared with the direct copy activity from my last blog post?

 

Spark SQL Table

 

I can’t copy the data from the mirrored database via a copy activity, so I need to take a different approach.

Since I can query the data via the SQL Endpoint, I can use Spark SQL for example to move the data.

Unfortunately, reading data via Spark SQL does not work out of the box, since my lakehouse does not have schema support, according to the error:


“Mirrored DB is not accessible with code in this Notebook. You are not able to access Mirrored DB with code in Notebook unless you choose a Lakehouse with Schema support as default Lakehouse.”

 

So reading with “FROM workspace.Planvorming.dbo.post” unfortunately did not work.

 

So I created a schema Lakehouse, and suddenly it worked!

So I can use Spark SQL with a schema-enabled Lakehouse to interact with the mirrored data.

I can therefore create a table based on the mirrored data with the following statement, and it works:

 

%%sql
CREATE OR REPLACE TABLE lh_raw.planvorming_post2
USING DELTA
AS
SELECT Title AS Title,
       `actuele code` AS ActueleCode,
       `afkorting actuele code` AS AfkortingActueleCode,
       `indicatie rapport` AS IndicatieRapport
FROM XXXXX.Planvorming.dbo.Afsluiting
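The same statement can also be issued from PySpark instead of a SQL cell. Here is a minimal sketch; the `build_ctas_sql` helper and its parameter names are my own illustration (not a Fabric API), and column names containing spaces are wrapped in backticks so Spark reads the columns rather than string literals:

```python
# Sketch: build a CREATE TABLE ... AS SELECT statement for a mirrored list.
# The helper and its parameters are illustrative, not part of any Fabric API.

def build_ctas_sql(target_table: str, source_table: str, columns: dict) -> str:
    """Build a Spark SQL CTAS statement.

    `columns` maps source column names (which may contain spaces)
    to clean target column names.
    """
    select_list = ",\n       ".join(
        f"`{src}` AS {dst}" for src, dst in columns.items()
    )
    return (
        f"CREATE OR REPLACE TABLE {target_table}\n"
        f"USING DELTA\n"
        f"AS\n"
        f"SELECT {select_list}\n"
        f"FROM {source_table}"
    )

sql = build_ctas_sql(
    target_table="lh_raw.planvorming_post2",
    source_table="XXXXX.Planvorming.dbo.Afsluiting",
    columns={
        "Title": "Title",
        "actuele code": "ActueleCode",
        "afkorting actuele code": "AfkortingActueleCode",
        "indicatie rapport": "IndicatieRapport",
    },
)
# In a Fabric notebook you would then run: spark.sql(sql)
print(sql)
```

In a notebook cell you would pass the result to `spark.sql(...)`; building the statement in Python makes it easy to loop over several lists with the same pattern.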

 

In total, we have built the following flow as option 1:

Shortcut

 

So with a schema Lakehouse, it works, but there is another way.

As per Microsoft documentation, I need to create a shortcut to read the data.

https://learn.microsoft.com/en-us/fabric/mirroring/explore-onelake-shortcut

 

So I tested it and created a shortcut in my lakehouse to the mirrored database.
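The shortcut itself is easiest to create through the Lakehouse UI, but Fabric also exposes a Create Shortcut REST API. The sketch below only builds the request URL and body; the exact endpoint and body shape reflect my reading of that API, so treat them as assumptions and verify against Microsoft's documentation, and all IDs are placeholders:

```python
# Sketch only: the endpoint and body shape below reflect my reading of the
# Fabric "Create Shortcut" REST API; verify against Microsoft's documentation
# before relying on it. All IDs are placeholders.
import json

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_shortcut_request(workspace_id, lakehouse_id, name,
                           target_workspace_id, target_item_id, target_path):
    """Return URL and JSON body for a OneLake shortcut to a mirrored-DB table."""
    url = f"{FABRIC_API}/workspaces/{workspace_id}/items/{lakehouse_id}/shortcuts"
    body = {
        "path": "Tables",  # create the shortcut under the Tables section
        "name": name,
        "target": {
            "oneLake": {
                "workspaceId": target_workspace_id,
                "itemId": target_item_id,  # the mirrored database item
                "path": target_path,
            }
        },
    }
    return url, json.dumps(body)

url, body = build_shortcut_request(
    "<workspace-id>", "<lakehouse-id>", "Afsluiting",
    "<workspace-id>", "<mirrored-db-id>", "Tables/dbo/Afsluiting",
)
```

You would POST this body with a bearer token; in practice, the UI route described above is simpler for a handful of lists.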

 

 

This works indeed as expected and is really fast!

In total, this flow represents option 2: Mirrored database with shortcut to Lakehouse.

Besides these two options, we still have option 3 from my previous blog: the SharePoint List copy activity in a pipeline, which looks as follows:

Conclusion

To conclude, we have identified the following options:

  1. Mirrored Database with Spark SQL Table in schema Lakehouse
  2. Mirrored database with shortcut to Lakehouse
  3. The SharePoint List copy activity in a pipeline

 

All of these options are workable solutions and appear to be fast enough for my situation.

If your goal is fastest time-to-value with minimal ETL, start with Mirroring. It’s designed to avoid complex ETL and continuously replicate SharePoint List data into OneLake. 

 

If your goal is Lakehouse-first engineering and Spark usage, choose Mirroring + Lakehouse table shortcut. It gives you the managed replication benefits of mirroring while making the data convenient for notebooks and Lakehouse workflows.

 

If your goal is maximum control and production-grade repeatability under your own governance, use Pipeline Copy Activity. 

 

From my perspective, there is no significant difference between these three options when it comes to performance. The shortcut is the simplest approach and, in practice, it just works. The main downside is that you need to keep the mirroring active, which introduces an additional dependency.

The bigger question is:

  • Is this setup actually needed for your SharePoint list?
  • How frequently does the data change?
  • And equally important, how large is the dataset you are dealing with?

 

These factors have a major impact on the final solution. A relatively small list with infrequent changes may not justify continuous mirroring and could be handled perfectly fine with a simpler ingestion pattern. On the other hand, larger datasets or lists that change often may benefit more from a mirrored approach to ensure data is always up to date with minimal operational overhead.
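To make that trade-off concrete, here is a rough rule of thumb as a Python sketch. The thresholds and the `suggest_pattern` helper are entirely my own illustrative assumptions, not official guidance:

```python
# Rough decision sketch for choosing an ingestion pattern.
# The thresholds are illustrative assumptions, not official guidance.

def suggest_pattern(rows: int, changes_per_day: int) -> str:
    """Suggest an ingestion option based on list size and change frequency."""
    if rows < 5_000 and changes_per_day <= 1:
        # Small, mostly static list: a scheduled copy activity is enough.
        return "pipeline copy activity"
    if changes_per_day > 10:
        # Frequently changing data benefits most from continuous mirroring.
        return "mirroring + shortcut"
    # Middle ground: mirroring with a Spark SQL table gives more control.
    return "mirroring + spark sql table"

print(suggest_pattern(500, 0))      # small static list
print(suggest_pattern(50_000, 25))  # large, busy list
```

The exact numbers matter less than the shape of the reasoning: size and change frequency drive the choice, not raw copy speed.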

 

Ultimately, the right choice is less about raw performance and more about data volume, change frequency, and the level of complexity you want to introduce and maintain in the long run.