
Interactive reports in the Cloud

Cristobal Aguirre
April 30th, 2019 · 7 min read

Motivation

Reporting sucks. It feels kind of silly how easy it has become to access the very bleeding edge in some technologies—building and training neural networks with millions of parameters, setting up intricate data processing pipelines, getting creative with visualization libraries—only to have it all culminate in a shitty powerpoint presentation or a bland pdf document. And then when someone asks for an update 2 months later…🤯🔫.

“What about Jupyter notebooks?” you must be wondering… they’re a great tool to showcase an analysis piece—think Kaggle kernels—but I think they’re closer to replacing a standalone project than being something you would send to clients or your boss with some frequency.

Another good alternative is Plotly’s Dash:

Written on top of Flask, Plotly.js, and React.js, Dash is ideal for building data visualization apps with highly custom user interfaces in pure Python. It’s particularly suited for anyone who works with data in Python.

I’d say Dash is a great tool if you need more advanced visualizations, but it’s definitely an order of magnitude more involved because besides all of the additional coding required, you need to figure out hosting. Heroku is a great option in terms of simplicity, but it doesn’t come cheap.

This tutorial will show you how to set up a data pipeline to ingest and clean some data, and then present it in nice, interactive dashboards that can be freely shared and exported to PDF. Once you’re done, you’ll have online reports with interactive visualizations and filtering that will be a breeze to update and share with your end users.

The pipeline

Our first task is setting up a data processing pipeline, which will run periodically to ingest the data and clean it up. Maybe it’s in a SQL database, or it gets dumped in some folder or S3 bucket as CSV files, maybe you have some code scraping a website, maybe you hook up to an API… this step is obviously going to be completely project dependent, but the general workflow is always the same: read the data into pandas dataframes, process it (add/remove columns, fill missing values, merge and denormalize, etc.), and then export to Google Sheets. This is an ooold workflow traditionally referred to as Extract, Transform, Load (ETL).

Getting data

On this occasion, we’ll be using the NYPD motor collisions data. It gets updated daily, and each record represents a collision with information like precinct, borough, involved vehicles, etc.

We’ll write a python script that gets and cleans the data. This is where you’ll denormalize it if you’re reading from multiple tables (in case you’re reading from a SQL database), add new columns, clean them up, etc. I like doing this step in Jupyter notebooks first, which allows me to interact directly with the data and see what needs to be done. Once I’m more familiar with the data and happy with the transformations required, I’ll put them all together into a script like this one.
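For that notebook stage, a couple of throwaway cells are usually all it takes to spot what needs fixing—something like this (the final script follows right after):

# quick exploration in a notebook, before writing any cleaning code
import pandas as pd

df = pd.read_json('https://data.cityofnewyork.us/resource/qiz3-axqb.json')
df.info()           # column dtypes and non-null counts
df.isna().sum()     # which columns have missing values
df.head()           # eyeball a few rows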

# data_processing.py
import pandas as pd

def get_data():
    """ Gets data. Returns a pandas df """
    base_url = 'https://data.cityofnewyork.us/resource/qiz3-axqb.json'
    df = pd.read_json(base_url)
    return df

def clean_data(df):
    """ Cleans data. Returns a pandas df """

    # Make a copy so the function remains pure
    df = df.copy()

    # Let's drop some columns that we won't use
    drop_cols = [col for col in df.columns if col.startswith(':')]
    drop_cols.append('location')
    df.drop(drop_cols, axis=1, inplace=True)

    # Looks like the street names need to be stripped
    street_cols = [
        'cross_street_name',
        'off_street_name',
        'on_street_name',
    ]
    for col in street_cols:
        df[col] = df[col].str.strip()

    # Create lat,long col
    df['coordinates'] = (df['latitude'].astype(str) +
                         ', ' +
                         df['longitude'].astype(str))

    return df

def etl():
    """ Main function. Gets and cleans data. Returns a pandas df """
    df = get_data()
    df = clean_data(df)
    return df
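
Once the script is in place, a quick interactive run is an easy sanity check. One caveat worth flagging: this Socrata endpoint only returns a limited page of records per request by default, so if you need more history you can pass Socrata’s standard $limit query parameter (that parameter isn’t used in the script above).

# quick sanity check, e.g. in a Python shell or notebook
import data_processing

df = data_processing.etl()
print(df.shape)
print(df.head())

# To pull more rows in get_data(), the same endpoint accepts Socrata's
# $limit query parameter, e.g.:
# 'https://data.cityofnewyork.us/resource/qiz3-axqb.json?$limit=50000'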

There’s nothing very fancy going on in this file, and what you do with the data depends on what you have so there’s no point in covering it here. What does deserve some attention is how you structure the process. Here I split the work into 2 different functions, one to load the data and another one to transform it. Then I put both together into a third function that I’ll call later for the full process.

The benefit of splitting it into separate functions is that it makes it much easier to test, debug and maintain. If your cleaning process is more complex than what I have here (and most likely it is) it may make sense to split things further into individual functions for each step. If you’re coming from a Jupyter notebook where you did the initial exploration, it will probably be a very direct conversion from each code block into its own function.
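To make the testing point concrete, here’s a minimal pytest sketch—the file name and the tiny hand-built dataframe are just illustrative, though the column names match the dataset above:

# test_data_processing.py
import pandas as pd

import data_processing


def test_clean_data():
    """ clean_data should strip street names, build coordinates and drop 'location' """
    raw = pd.DataFrame({
        'cross_street_name': ['  MAIN ST  '],
        'off_street_name': ['BROADWAY '],
        'on_street_name': [' 5 AVE'],
        'latitude': [40.7],
        'longitude': [-73.9],
        'location': ['dummy'],
    })
    clean = data_processing.clean_data(raw)
    assert clean['cross_street_name'].iloc[0] == 'MAIN ST'
    assert clean['coordinates'].iloc[0] == '40.7, -73.9'
    assert 'location' not in clean.columns

Because clean_data returns a new dataframe instead of mutating its input, tests like this need no setup or teardown at all.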

Exporting to Google sheets

Next, we need to export the data to google sheets. To do all the heavy lifting we’ll use the great gspread library, which is very well documented and easy to start using right out of the gate—though we’ll only be using the small portion that lets us dump the data into a pre-determined sheet.

First thing is getting credentials from the google developers console. This section of the gspread docs covers it, but I’ve found it to be a bit outdated—or at least partially incomplete—so I’ll cover it here as well. UIs are constantly changing so I won’t rely too heavily on them; most steps are relatively easy to figure out.

  1. Go to the google developers console and create a new project—call it whatever you want.
  2. You’ll be taken to the project dashboard. Right there in the middle you should see a link that says “+ ENABLE APIS AND SERVICES”. Go there, search for “Google Drive API” and click on enable. Then go back and search for “Google Sheets API” and enable as well. Go back to the main project dashboard.
  3. To the left, you’ll see a menu that has 3 items as of this writing: Dashboard, Library and Credentials. Go to credentials, and create a new service account key.
    [Screenshot: creating a new service account key in the google API console]
  4. Give the service account a name, and give it the “Editor” role (note that the ‘sodium-task-xxx’ name was randomly generated by google).
    [Screenshot: creating the service account credentials]
  5. Save the output to your project directory as google_secrets.json
  6. Now you must create the google sheet, and share it with the email created for that service account (in my case python-gsheets@sodium-task-240704.iam.gserviceaccount.com). You can also find that email address inside the json file you just downloaded. You’ll realize you forgot this step if you later get a SpreadsheetNotFound error when running the code—the quick connection check sketched right after this list will catch it early.
    [Screenshot: sharing the sheet with the service account email]
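
Before writing any more code, it’s worth running a quick connection check to confirm that both the credentials and the sharing step worked. This is just a throwaway sketch—'MyData' is the name I gave my sheet, so use whatever you named yours:

# check_credentials.py -- throwaway sanity check for the service account setup
import gspread
from oauth2client.service_account import ServiceAccountCredentials

scope = ['https://spreadsheets.google.com/feeds',
         'https://www.googleapis.com/auth/drive']
creds = ServiceAccountCredentials.from_json_keyfile_name('google_secrets.json',
                                                         scope)
client = gspread.authorize(creds)

# Raises SpreadsheetNotFound if the sheet wasn't shared with the service account
sheet = client.open('MyData')
print('Connected to:', sheet.title)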

Once that’s done, we’ll define some functions that will help us create, open and write to sheets. Put these functions in a file that we’ll call from the main one. The main function we’ll call here, paste_csv, will go to a given tab within a given spreadsheet, delete all contents and paste a given CSV string. You can adjust this code to append the data to the bottom instead of overwriting—if you want to preserve the data—although I would suggest against it.

If the goal was collecting historic data instead of (or in addition to) just reporting on the current status, I would edit the process to append the pandas dataframe to the bottom of a CSV file, and then write another piece of code that reads the data from this file and dumps whatever you need into the google sheet—basically separating the tasks of building a historic database and reporting on it. I prefer this approach because google sheets isn’t really made to handle millions of rows, so even if you start small, your dataset will eventually grow enough that you’ll have to make the switch anyway—why not do it from the start?
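That variant could look something like the sketch below—the history.csv filename is just an example, not something used elsewhere in this post:

# history_etl.py -- sketch of the append-to-CSV variant
import os

import data_processing

HISTORY_FILE = 'history.csv'


def append_to_history():
    """ Append today's pull to a local CSV, writing the header only the first time """
    df = data_processing.etl()
    write_header = not os.path.exists(HISTORY_FILE)
    df.to_csv(HISTORY_FILE, mode='a', header=write_header, index=False)

The reporting piece would then read from history.csv (or whatever aggregation you need from it) and dump that into the google sheet, instead of the raw daily pull.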

# gspread_utils.py
import gspread
from oauth2client.service_account import ServiceAccountCredentials


def get_client(creds_file='google_secrets.json'):
    """
    Use creds to create a client to interact
    with the Google Drive API
    :param creds_file: path to the service account JSON key file
    :return: gspread Client object
    """
    scope = ['https://spreadsheets.google.com/feeds',
             'https://www.googleapis.com/auth/drive']
    creds = ServiceAccountCredentials.from_json_keyfile_name(creds_file,
                                                             scope)
    client = gspread.authorize(creds)
    return client


def get_last_row(tab):
    """ Helper function to find last row with data """
    last = tab.row_count
    if last < 1:
        return 1
    return last


def get_last_col(tab):
    """ Helper function to find last col with data """
    last = tab.col_count
    if last < 1:
        return 1
    return last


def clear_contents(tab):
    """ Delete all data in a given tab
    (must be a gspread Worksheet object) """
    last_row = get_last_row(tab)
    last_col = get_last_col(tab)
    range_of_cells = tab.range(1, 1, last_row, last_col)
    # This section is not very performant (to put it mildly), but it
    # still only takes a few seconds for thousands of rows. Since this
    # script is meant to run offline, that's not a big issue.
    for cell in range_of_cells:
        cell.value = ''
    tab.update_cells(range_of_cells)


def paste_csv(csv_data, sheet,
              tab_name='Sheet1',
              starting_cell='A1'):
    """
    Deletes content in the tab (if it exists) and pastes csv data
    csv_data - csv string
    sheet - a gspread.Spreadsheet object
    tab_name - name of the tab to paste into
    starting_cell - string giving the starting cell, e.g. 'A1'
    """
    # If the tab doesn't exist, create it
    try:
        tab = sheet.worksheet(tab_name)
    except gspread.WorksheetNotFound:
        tab = sheet.add_worksheet(title=tab_name,
                                  rows="20", cols="5")
    clear_contents(tab)

    (first_row, first_column) = gspread.utils.a1_to_rowcol(starting_cell)

    body = {
        'requests': [{
            'pasteData': {
                "coordinate": {
                    "sheetId": tab.id,
                    "rowIndex": first_row - 1,
                    "columnIndex": first_column - 1,
                },
                "data": csv_data,
                "type": 'PASTE_NORMAL',
                "delimiter": ',',
            }
        }]
    }
    sheet.batch_update(body)

Now let’s put both pieces of code together into a third file.

# etl_gsheets.py
import gspread_utils
import data_processing

def main():
    """ Gets clean data and dumps it into google sheets """

    # Get data
    df = data_processing.etl()

    # Send to Google sheets
    client = gspread_utils.get_client(creds_file='google_secrets.json')
    # MyData is the name of the google sheet
    sheet = client.open('MyData')
    gspread_utils.paste_csv(df.to_csv(index=False),
                            sheet,
                            tab_name='Sheet1',
                            starting_cell='A1')

if __name__ == '__main__':
    print('Starting..')
    main()
    print('All done!')

Now for the moment of truth, go to the shell and run the code:

$ python etl_gsheets.py

If all goes well, you should go to your google sheet and see the data there. Hooray! Now whenever you run this command, the entire pipeline will kick in, getting the data, cleaning it up and dumping it in the Google Sheet you gave it.

Data Studio Dashboard

Now we’ll create a Data Studio report and populate it with the data we just created. Data Studio is a simple yet powerful visualization tool that allows easy report creation and sharing. I’d say the sharing portion is the standout feature: you can share reports freely and aren’t restricted like with paid products (Power BI, Tableau, etc.). Another feature I like a lot is how easy it is to embed them, as an html snippet is readily available to copy and paste wherever you want (scroll to the bottom to see mine). While you have a much more limited spectrum of available visualizations (no fancy force graphs here), I find its offering is almost always more than enough—remember the goal here is replacing an excel chart anyway.

I won’t cover its usage too much here—you just have to play around with it. You’ll find that most users come from google analytics, where most of the data handling is already taken care of because Data Studio connects directly to it—and to other data sources too, which is actually pretty neat. Unfortunately, this means it’s harder to find help online for a setup like ours—the [google-data-studio] StackOverflow tag has just 555 questions as of this writing.

Creating the report

  1. Go to https://datastudio.google.com and create a new report.
  2. Click on “Create new data source” or something like that. It will probably be to the right.
  3. Scroll down until you find “Google Sheets”. Click there, and look for your newly created spreadsheet, and then click on “Connect”.
    [Screenshot: opening the Google Sheets data source in Data Studio]
  4. You’ll see all your columns there, colorized the way data studio breaks down your data: greens are dimensions and blues, metrics. In very simple terms, dimensions are columns that represent categories through which you can slice your data (country, date, sector, etc.) and metrics are the values that you want to visualize (income, temperature, etc.). You can adjust the data types if any was read incorrectly, and define formatting and aggregation preferences (average for percentages, sum for money, etc.). You can also create new fields, but I found this to be much easier to do in the python data processing layer. The same goes for blended data, a data studio feature that allows merging data sets like classic SQL joins—again, I prefer doing all of this within python and just dumping the data ready to visualize.
  5. Once in the main panel, you can switch between the EDIT and VIEW modes to edit your report. I won’t cover all the features here, but encourage you to just play around with it—it is mostly pretty intuitive, and if you can’t figure out how to do a specific thing (apply a filter to a subset of charts, add a default to a dropdown list, etc.) you’ll probably find the answer in google. While the data connection side has limited online support because most people use it for google analytics, that same crowd makes the report building and visualization side of the community much more active.
  6. Each object (chart, filter, text box) has a DATA and a STYLE tab in the control bar to the right, where you determine what the object will display and how it’s formatted. Depending on the visualization, you can choose one or more dimensions—the green-colored columns that correspond to categorical variables—and metrics, the blue-colored ones that represent values. You can also select a “Date Range Dimension” that allows you to use the “Date Range” filter, in addition to regular filters.

Finished product

As mentioned earlier, reports are very easy to share and publish. You can set rules to define who can view or edit it, and get an html snippet to embed into websites as you can see below. Note the “Full Screen” symbol on the bottom-right corner (so you don’t have to squint).

Next steps

Now you’re done with the basic version. Want to kick it up a notch and go fully automated? Move this code over to an AWS Lambda function and set it to run on a fixed schedule or whenever your data updates.
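The Lambda setup itself is beyond the scope of this post, but the handler ends up being a thin wrapper around main()—a minimal sketch, assuming you package the modules above (plus their dependencies and google_secrets.json) in the deployment bundle and trigger it with a CloudWatch Events / EventBridge schedule:

# lambda_function.py -- minimal sketch of the Lambda entry point
import etl_gsheets


def lambda_handler(event, context):
    """ Runs the full pipeline; wire this up to a scheduled rule """
    etl_gsheets.main()
    return {'status': 'ok'}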
