This article covers key Switch terminology, how assets are organized on the Platform, and the three workflow stages.
How Your Assets are Organized on the Platform
Please ensure all users have Microsoft Edge, Google Chrome, or Mozilla Firefox installed for use with the Platform. Note that neither Microsoft Internet Explorer nor Safari is currently supported.
The Platform organizes building data to reflect device and site management in the real world. Our asset hierarchy is:
Portfolio → Sites → Equipment or Device → Data Points
This hierarchy of data allows users to move from equipment-level analysis up to portfolio-wide performance benchmarking with ease.
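The hierarchy above can be sketched as a simple nested data model. This is an illustrative sketch only; the class and field names below are hypothetical and are not the Platform's API.

```python
from dataclasses import dataclass, field

@dataclass
class DataPoint:
    name: str   # e.g. "Space Temperature"
    unit: str   # e.g. "degC"

@dataclass
class Equipment:
    name: str                                   # e.g. "VAV-01"
    points: list = field(default_factory=list)  # list[DataPoint]

@dataclass
class Site:
    name: str
    equipment: list = field(default_factory=list)  # list[Equipment]

@dataclass
class Portfolio:
    name: str
    sites: list = field(default_factory=list)      # list[Site]

# Portfolio -> Sites -> Equipment -> Data Points
portfolio = Portfolio("Example Portfolio", sites=[
    Site("HQ", equipment=[
        Equipment("VAV-01", points=[DataPoint("Space Temperature", "degC")]),
    ]),
])

# Walking the tree from the top supports portfolio-wide benchmarking;
# starting at a single Equipment supports equipment-level analysis.
all_points = [p for s in portfolio.sites
                for e in s.equipment
                for p in e.points]
```

Because each level nests inside the one above, the same data can be aggregated upward or drilled into downward without restructuring.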
Fundamentals of the Switch Workflow
The process of setting up the Platform is as follows:
Integrate → Analyze → Act
See below for what this looks like in practice.
The Three Workflow Stages
Each of the three stages above is broken down into the steps users perform at that stage.
Table 1. Platform Usage Stages

Integrate:
1. Identify Data Sources
2. Connect Data Sources to Platform
3. Data Cleansing
4. Point Selection
5. Apply Templates
6. Tag Data

Analyze:
1. Deploy Logic
2. Evaluate Alerts

Act:
1. Create Events
2. Track Events
3. Measure and Verify Impact
A high-level summary of each is provided in this section.
Key Terms and Relationships
The table above introduces some key Platform terminology. Each of these terms is a major Platform component or feature, and their meanings and workflow relationships can be summarized as follows:
Figure 2. The Key Concepts.
Tagging and templating are included as part of the Data icon in the workflow above. These are usually performed as a one-time configuration step during the integration of the data source.
Summary of Workflow Stages
The first stage of any project is to identify your data sources. The Platform supports a wide variety of data sources, from on-premises data collection to internet protocols to third-party APIs. Once a data source is identified and connected, its data is continually drawn into the Platform servers (a.k.a. 'the cloud'). Data source management is conducted in the Site Builder feature.
Processing Data (Cleansing, Templating, Tagging)
Once a data source is connected, the incoming data must be subject to:
- ‘Data Cleansing’ - checked for quality, formatting and consistency, and edited for readability.
- ‘Point Selection’ - filtered down to the data points of interest.
- ‘Templating’ - annotated with key standardized metadata to support Platform processing and formatting.
- ‘Tagging’ - grouped into user-defined functional groups to support the deployment of analytics and general in-platform data filtering.
These processes essentially entail working with a table of imported data. This can be conducted in-platform using the built-in data management tools (Site Builder and Tagging), or offline in Excel or similar data manipulation tools. If this process is conducted outside of the Platform, the processed data can be easily uploaded back into the Platform.
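The four processing steps above can be sketched as successive passes over an imported point table. The column names, template names, and tags below are hypothetical examples, not Platform-defined values.

```python
# A raw point table as it might arrive from a connected data source.
raw_points = [
    {"name": "  spaceTemp_1 ", "value": "22.5"},
    {"name": "coolSP_1",       "value": "24.0"},
    {"name": "debug_counter",  "value": "981"},   # not of interest
]

# 1. Data cleansing: fix formatting, types, and readability.
cleaned = [{"name": p["name"].strip(), "value": float(p["value"])}
           for p in raw_points]

# 2. Point selection: filter down to the data points of interest.
selected = [p for p in cleaned if not p["name"].startswith("debug_")]

# 3. Templating: annotate with standardized metadata.
templates = {"spaceTemp_1": "Space Temperature",
             "coolSP_1": "Cooling Setpoint"}
for p in selected:
    p["template"] = templates[p["name"]]

# 4. Tagging: group into user-defined functional groups.
for p in selected:
    p["tags"] = ["VAV-01"]
```

Because each pass is just a transformation of a table, the same steps can equally be performed offline in Excel and the result uploaded back into the Platform, as described above.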
Figure 3. Processing Point Data (Tagging Not Shown).
Switch cleans all incoming data points so that they’re immediately ready for any future site analyses.
Once configured, clean data streams into the Platform, where it is continuously analyzed and routines are run based on this analysis. Switch accomplishes this by deploying rule-based fault detection modules, benchmarking logic and advanced calculation routines from the Logic Builder feature.
These analytics elements are normally built as generic (i.e. ‘universal’) modules using standardized point templates, and then shared to equipment on specific sites using their tag groups. By building logic this way, it can be reused on multiple instances of equipment across multiple sites.
Using points from the Processing Data section above, we have created a 'Hot Space Temperature' logic module example using the relevant templates.
Figure 4. A Generic Logic Module Based on Templates.
This module triggers when the 'Space Temperature' point reads a higher value than the 'Cooling Setpoint' point and the 'Mechanical AHU Status' is 'ON'. The logic module uses only standardized templates in its definition and has no specific references to the VAV equipment for which points were selected in the Processing Data section.
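The trigger condition described above can be expressed as a small rule over template names. This is a minimal sketch, assuming each equipment instance supplies its current readings keyed by template; the function name and data shape are illustrative, not the Logic Builder's actual representation.

```python
def hot_space_temperature(readings: dict) -> bool:
    """Trigger when Space Temperature exceeds Cooling Setpoint
    while the Mechanical AHU Status is ON.

    Note the rule refers only to standardized templates, never to a
    specific piece of equipment, so it stays generic and reusable.
    """
    return (readings["Space Temperature"] > readings["Cooling Setpoint"]
            and readings["Mechanical AHU Status"] == "ON")

# Example readings from one equipment instance:
triggered = hot_space_temperature({
    "Space Temperature": 25.5,
    "Cooling Setpoint": 24.0,
    "Mechanical AHU Status": "ON",
})
# triggered is True, so this instance would raise an alert.
```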
To deploy this logic to the VAV above, we share the module to a tag group. In this case, we share to a VAV tag group, and the logic module's layers match to the templates we assigned to the VAV data points.
Each logic module contains a record of the devices and points it matches with in the Platform’s Logic Builder.
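The matching described above can be sketched as a simple rule: a generic module shared to a tag group matches any equipment carrying that tag whose points cover the module's required templates. All names and the record shapes below are hypothetical illustrations, not the Platform's internal representation.

```python
module = {
    "name": "Hot Space Temperature",
    "shared_to_tag": "VAV",
    "required_templates": {"Space Temperature", "Cooling Setpoint",
                           "Mechanical AHU Status"},
}

equipment = [
    {"name": "VAV-01", "tags": {"VAV"},
     "templates": {"Space Temperature", "Cooling Setpoint",
                   "Mechanical AHU Status"}},
    {"name": "AHU-01", "tags": {"AHU"},
     "templates": {"Supply Air Temperature"}},
]

# Match: the equipment is in the shared tag group AND its points
# cover every template the module requires (subset test).
matches = [e["name"] for e in equipment
           if module["shared_to_tag"] in e["tags"]
           and module["required_templates"] <= e["templates"]]
# matches == ['VAV-01']
```

This is why the same generic module can be deployed to many instances of equipment across many sites: anything tagged into the group with the right templates is matched automatically.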
The results from the deployed logic modules are visible in Alerts Analysis. This feature summarizes the alerts generated by site, equipment, severity and time period, enabling users to drill into alerts to discover optimization opportunities or emergent mechanical faults. Users can then record identified issues and opportunities as events.
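A summary of alerts by the dimensions mentioned above can be sketched with simple counting. The alert records and field names below are hypothetical examples, not the Alerts Analysis data model.

```python
from collections import Counter

alerts = [
    {"site": "HQ",    "equipment": "VAV-01", "severity": "High"},
    {"site": "HQ",    "equipment": "VAV-02", "severity": "Low"},
    {"site": "Depot", "equipment": "AHU-01", "severity": "High"},
]

# Summarize alert counts by site and by severity.
by_site = Counter(a["site"] for a in alerts)
by_severity = Counter(a["severity"] for a in alerts)

# by_site["HQ"] == 2; by_severity["High"] == 2
# Drilling in means filtering the same records down, e.g. all
# high-severity alerts at HQ:
hq_high = [a for a in alerts
           if a["site"] == "HQ" and a["severity"] == "High"]
```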
Figure 5. Alerts Analysis Overview.
Figure 6. Alert Drill Down to Trend Data.
With events, opportunities are recorded, updated, shared, commented on and tracked over time, supporting the action-oriented side of this workflow by turning analysis into real-world actions.
Figure 7. A New Event.