Generated Maps Dashboard

Dynamically and strategically pull data from every map that passes your configured filters into one map that acts as a dashboard, so you can focus and drill down into specific information across projects to gain insight quickly, monitor your projects and their progress, and make informed decisions.

Configuring A Map As A Dashboard

First, you'll want to create a new map that will be used as your dashboard.

From the Maps page, click the "Create New Job" button, name the job, and select a job model. Optionally, you can open the Job Settings of your new job to modify the Map Styles as needed, add any job attributes, decide whether Attribute History should be enabled, or make any other top-level preparations for the job.


Navigate to the Project Management page (1) and click on the "Dashboards" option (2) under the "Reporting" section of the menu on the left. Click the blue "+" button (3) to create a new dashboard.


After naming the dashboard, you'll choose "Map" for the dashboard type (more options such as using Google Sheets will be coming soon).


Underneath "Dashboard Job," you'll select the blank map you created. This is the map that will be updated with data aggregated from other job(s) that you filter.


You'll configure this filter next, under "Job Filter." Here, you'll create your own conditions for which jobs should be used for pulling data. This uses the Logic Editor. (Using the logic shown above will allow data to be pulled from all jobs.)


Next, the "Node Filter" allows you to create conditions for which specific nodes should be used when pulling data. (The above logic will bring in all nodes from the job(s) that make it through the Job Filter.)
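
If it helps to think of the two filters in code, they behave like a pair of predicate functions: every job and node is checked, and anything that passes is pulled into the dashboard. The sketch below is a conceptual Python illustration only, not the Logic Editor's actual format, and the job/node structures are simplified assumptions.

    # Conceptual Python sketch only -- the real filters are built visually in
    # the Logic Editor. Job and node structures here are simplified assumptions.

    def job_filter(job):
        # A Literal block set to "TRUE" acts like a filter that always passes.
        return True

    def node_filter(node, job):
        # Same idea for nodes: "TRUE" pulls in every node from every passing job.
        return True

    def select_nodes(all_jobs):
        """Collect every node whose job and node both pass their filters."""
        selected = []
        for job in all_jobs:
            if not job_filter(job):
                continue
            for node in job.get("nodes", {}).values():
                if node_filter(node, job):
                    selected.append(node)
        return selected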


The "Configure Attributes" step is where you will specify the attributes on nodes in your dashboard map; the "Attribute Value" will be calculated from data on nodes in the jobs you're pulling data from. It can be as simple as mapping the attribute from those jobs you are pulling from to the attribute in your dashboard map, or it can be as complex as taking information from the jobs you are pulling from, running computations, and storing the result in an attribute in the dashboard map.


Optionally, you can also bring in photos from your source jobs using the Photo Filter. If you don't want to bring in photos, you can leave the filter alone or set it to "False."


If you wanted to bring over all the photos on a node, you could set the Photo Filter to "TRUE" by making the block a "Literal" and selecting "True" (like the logic block depicted above).


If you do choose to use the Photo Filter, there will also be a Starred Photo Filter step after this.


This Starred Photo Filter, which appears when you configure a Photo Filter, will star whichever photo passes the filter, making it the node's main photo. (If multiple photos pass the filter, the first one will be starred.) If you don't want any photos to be starred, you can set this filter to "FALSE"; this will remove the star from a starred photo that passes the Photo Filter. If you want a photo that is starred in your source job(s) to remain starred in your dashboard, you will have to configure the Starred Photo Filter in such a way that the starred photo passes through it.


The starred filter is given the photo's ID, the full job that the photo was uploaded to, and other photo information that can be viewed in the Logic Editor's "Inputs" debugger when you open the "Current Available Dataset." This is the set of data that you can work with when setting up the filter.
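
For example, if you wanted the dashboard to keep whichever photo was already starred in the source job, the filter would conceptually behave like the sketch below. This is a Python illustration only, and "starred" is a hypothetical field name; check the "Current Available Dataset" in the Inputs debugger for the real data paths.

    # Conceptual sketch only -- "starred" is a hypothetical field name; inspect
    # the Inputs debugger's "Current Available Dataset" for the real data paths.

    def starred_photo_filter(photo_id, photo, job):
        # Pass only the photo that is already the node's main photo in its
        # source job, so the same photo stays starred in the dashboard map.
        return photo.get("starred", False)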


Finally, if you filled out the initial Photo Filter, you will also have the option to include any markers and traces that are on the photos you're bringing into your dashboard map.

Check the checkbox to bring in any and all markers and traces found on the photo. If you want the photo without any of the data on it, leave the box unchecked.


Once you're done with the finishing touches on your dashboard, click "Finish Creating Dashboard."


Additional Info

These dashboard maps are updated from their source job(s) every night, but if you need to make sure you have the most up-to-date information, you can manually update the dashboard map from the Project Management page. Under "Dashboards," select your dashboard to open it and click the blue "Bulk Update" button towards the top right.


While you can edit and add data to your map dashboard, these changes will have no impact on the data in your source job(s). Only data changes in the source job(s) will affect the data stored in your map dashboard (when the map dashboard is updated).


We often use a map dashboard as a deployment map; this helps us collect data efficiently and effectively and cover more ground. For example, we may notice we need to collect a pickup or two near a location where we need to collect data for a whole job.


We've also used map dashboards for managing permits. We color-code nodes based on the permit status of all the poles across our projects that need permits, and we include a link back to the original node in case we need to continue work on it in its source job.


Basic Dashboard Walkthrough

Let's make a deployment map of sorts that will show all the poles that need to be collected in the field.


Create a job to contain all the poles that need to be visited in the field and name it "Deployment Map." Then visit the Project Management page to create the configuration for the dashboard job we just created.


Next, we'll configure the Job Filter. Filtering jobs is required. (If you want data from all jobs to be used in your dashboard, you can use a single Literal block in the Logic Editor with a Boolean value of "TRUE.") Here, we'll filter jobs by only pulling in data from jobs with a specific job model.


If you need to see how to type the job model name in the Logic Editor so it understands which job model you're referencing, you can use the Debugger's "Inputs" to see a job's structure. Here we've selected "Demo Dashboard Job," and in the "Current Available Dataset" to the left, we'll look under "job" and then under "job_creator" for the model's name, which in this case is "sprint_demo_company."


In the Logic Editor for the Job Filter, we'll first use an Expression block with the "Equal" operator; you can search for it by clicking on the "Operator Type" input and then clicking on "Equal" in the search results. You'll see "Equal" appear next to the small "Operator Type" text.


Then the first child block will be a data type, so click the "Data" option. This is where we'll type out the path: "job.job_creator".


Finally, the second block will be a literal, so choose "Literal." Keep the "Value Type" as Text. This is where we'll type the name of the job model exactly as it appeared in the Inputs we looked at previously. In this case, we'll use "sprint_demo_company".
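
Taken together, the three blocks (an "Equal" expression with a Data child and a Literal child) boil down to the comparison sketched below in conceptual Python, not the Logic Editor's own format:

    # Conceptual equivalent of the Job Filter blocks:
    #   Equal( Data("job.job_creator"), Literal("sprint_demo_company") )

    def job_filter(job):
        return job.get("job_creator") == "sprint_demo_company"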


For the Node Filter, we want all the nodes that haven't been fielded yet. These nodes won't have the "Field Completed" attribute, since this attribute is added when a Fielder clicks "Done" on a pole once they've collected data.


We'll start with an expression block and use the "Does Not Exist" operator. The first child block under that will be the data path; select "Data" and use the path "node.attributes.field_completed".


If you want to include nodes that have already been fielded but need a pickup, you'll want to make your Logic Editor match the image above. Start with a "Logical Or" expression. Inside that, you can put the same blocks we constructed previously (a "Does Not Exist" expression with a data block under that with the path "node.attributes.field_completed"). Then you'll also want to put in the new combination, using an "Equal" expression with the two blocks: a data block with the "node.attributes.pickup_required" path and a literal block with the text "field visit required".
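
In plain terms, that block tree reads "the node has not been fielded yet, OR it has been flagged as needing a field visit." A conceptual Python translation of those blocks (field names taken from the data paths above):

    # Conceptual equivalent of the Node Filter blocks:
    #   LogicalOr(
    #       DoesNotExist( Data("node.attributes.field_completed") ),
    #       Equal( Data("node.attributes.pickup_required"),
    #              Literal("field visit required") ) )

    def node_filter(node):
        attrs = node.get("attributes", {})
        not_fielded = "field_completed" not in attrs
        needs_pickup = attrs.get("pickup_required") == "field visit required"
        return not_fielded or needs_pickup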


Now that we've configured our dashboard map to look at nodes that aren't fielded (as well as pickups) within the job(s) that have the job model we've specified, we can configure the attributes that we want to show on these nodes in our Deployment Map.


Let's make sure the nodes we're pulling in also bring in their node type and the name of the job they're from. That will require some simple mapping of attributes. We can also include a link back to the node in its source job; we'll be computing this attribute.


To start mapping our attributes, we'll choose "Node Type" from the drop down to be used in our dashboard map. We'll use the Logic Editor to tell the software to use whatever data is in the data path "node.attributes.node_type" in the job(s) we're pulling from to populate the "Node Type" attribute.


Next, we can use the "Job Name" attribute to store which job the node is from. We'll choose "Job Name" from the drop down and, in the Logic Editor, we'll enter a data path "job.name" to grab that value.
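
Conceptually, these two mappings simply copy values from the source data into the dashboard node, one per Logic Editor data path (again a Python sketch, not the actual configuration format):

    # Conceptual sketch of the two attribute mappings:
    #   "Node Type" <- Data("node.attributes.node_type")
    #   "Job Name"  <- Data("job.name")

    def mapped_attributes(node, job):
        return {
            "node_type": node.get("attributes", {}).get("node_type"),
            "job_name": job.get("name"),
        }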


To include the link, there is no default attribute for us to use. (For "Node Type" and "Job Name," there were default attributes that already existed for us to use.) So we'll click "Finish Creating Dashboard" to save our progress and then navigate to the Model Editor to create a "Link" attribute.


Once you're on the Model Editor page, you should already be in the "Attributes" section. Click the blue "+" to get started and choose a name for the attribute (we chose "Link"). The two important steps here are to select "Hyperlink" for the Attribute Input Type and to make sure the attribute can be added to "Nodes." You can skip or answer "No" to all the other steps, then click "Finish Creating Attribute."


Going back to our Deployment Dashboard Configuration in the Project Management page, we can now add the "Link" attribute.


You may or may not have noticed this, but Katapult Pro builds the link using the Job ID (within the first orange box) with a "#" in front of it and the Node ID (within the second orange box) with a "/n" in front of it. So for each node, we'll be able to use the link above, replacing "-N_9o3H3_PmKWs88JDfz" with the Job ID (using a Data block) and "-OKRkji6P2b5OJc57Rxr" with the Node ID (using a Data block).


To combine these pieces of text (the "https://katapultpro.com/map/#", the Job ID, the "/n", and the Node ID), we'll use the Concatenate operator. (If you have a different domain than katapultpro.com, make sure you're entering the domain you're using.)


Since "https://www.katapultpro.com/#" will always appear first for the link, we can put this in a Literal block and keep the Value Type as "Text". Next comes the Job ID, and this may change from node to node, so we'll use a Data block to grab whatever that value is. The data path to use is simply "job_id." Next will always be "/n", so we'll put that in a Literal block with the "Text" Value Type. Finally, we're grabbing whatever the Node's ID is by using a Data block with the data path "node_id."


For the Photo Filter, we'll simply click "CONTINUE" and then hit the "Finish Creating Dashboard" button. Congratulations, you've created your first dashboard map and its configuration!


For now, we'll have to click "Bulk Update" for our dashboard map to populate. Click the "Bulk Update" button towards the top of the page to open the below dialog window.


Keep that first checkbox unchecked, and then select which dashboards you would like to update before hitting "Bulk Update." Go to the dashboard map, and you should see it populate!


Thanks for reading! For any questions, reach out to our Support desk at 717-430-0910 or support@katapultengineering.com. How can we improve our documentation? Leave a comment below!


