What differentiates Power BI from other tools?
Power BI has a unique foothold among the other BI tools operating in its space. By leveraging the dominance of the Office suite, Power BI is standing on the shoulders of a giant.
Consider the Gartner Magic Quadrant for Business Intelligence and Analytics. Other contenders such as IBM, TIBCO, and SAP also have platforms backing them up, but there’s a crucial difference: those platforms are ERPs, which are functionally far more prescriptive than Office.
Tableau and Qlik compete vigorously with Power BI, but they don’t have the same platform to leverage. HBR has an interesting take on the platform business model, identifying three goals platforms achieve:
- The toolbox – infrastructure that enables interactions between participants.
- The magnet – ability to pull in producers and consumers of the service.
- The matchmaker – facilitating good matches between producers and consumers.
Office 365, SharePoint, and Power BI provide the toolbox. Both Power BI and SharePoint have developer marketplaces, which act as the magnet. Integrated sharing of data across all of these platforms represents the matchmaker. Incidentally, that’s what Microsoft’s acquisition of Yammer was all about.
As Microsoft moves to a software-as-a-service model, the way we deliver BI products and insights will change. Just as waterfall was eaten by agile development, and Inmon was eaten by Kimball approaches, classic data warehousing will be eaten by something else.
Kimball believes the field will not go away. In an interesting prediction, he points out that traditional DW structures will continue to exist and grow, while non-structured data forms will extend them. Fundamentally, the whole point of the data warehouse is precomputation. That methodology will likely not disappear, but it will be challenged by approaches that allow faster execution through better matchmaking of data.
The data mart/warehouse space has traditionally been poorly executed, likely due to its inflexibility. It typically suffers long development times, as the data mart or warehouse has to follow the traditional software development cycle. That makes it hard to garner trust in the business and build trust in the data. Information is timely: the longer it takes to get, the less valuable it becomes. Software development is not data science.
The SharePoint, Office, and Power BI stack ships with every connector you can imagine. You can share data between objects within the platform, or leverage the multitude of connectors to pull in Oracle, HANA, OData, JSON, LinkedIn, or data from the web itself. That means you can extend your traditional data warehouse with just about anything you want.
Power BI offers a semantic model, which is really not new. What is new is the ability to build that semantic model from just about anything. Power Query can apply simple transformations while pulling data into the model, and more complex calculations can be written in DAX as calculated measures and columns. Microsoft has effectively taken the SSAS Tabular engine and built it into Power BI. When developing locally, that engine runs natively inside the Power BI Desktop application. Power BI in the cloud runs the same data engine, which Microsoft provides as a free service with Power BI. It’s the same idea as SQL Server in the cloud: the speed of the application (hardware and software) has been commoditized.
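As an illustration of what those DAX calculated measures look like, here is a minimal sketch. The `Sales` and `'Date'` tables and their column names are hypothetical, assumed for the example rather than taken from any particular model:

```dax
-- Base measure on an assumed Sales table:
Total Revenue = SUMX ( Sales, Sales[Quantity] * Sales[UnitPrice] )

-- The same measure shifted back a year, using a hypothetical Date table:
Revenue LY =
CALCULATE ( [Total Revenue], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )

-- Year-over-year growth built from the two measures above:
Revenue YoY % =
DIVIDE ( [Total Revenue] - [Revenue LY], [Revenue LY] )
```

Because measures compose like this, a prototype model can start with a handful of them and be refactored later without changing the reports that consume them.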
This is where Microsoft will eat away at traditional DW methodologies. Rapid prototyping can occur within the model and be leveraged in the cloud, where computational power is very cheap. In other words, these rapid prototypes can immediately deliver answers. If the information sticks and proves useful, the underlying model can be refactored without disrupting the behaviour you’re trying to orchestrate. The power of the cloud enables that rapid prototyping to work against larger and dirtier data structures. Put differently, there will be less need for precomputation.
It’s not the speed at which you can process data that determines success, but the speed at which your data scientists, developers, and BAs can iterate.