Big Data

Why Eliminating Ambiguity in Your Data Matters

By David Wegman, CTO @ Valor

The next time you strike up a conversation with your friendly neighborhood computer, take note of how long it takes before you get frustrated.  Despite the advances in artificial intelligence over the past decades -- and despite the incredible capacity humans have for adaptation -- human-computer interaction is unnatural (from the human perspective, anyway).  Every touch point where people provide input to computers, or receive output from computers, is an opportunity for misunderstanding.

Even as our systems get smarter, there are some simple steps we can take to eliminate ambiguity.  Data architects play an important role here, helping to ensure that information is not lost or garbled in translation.  These techniques are essentially an investment: every minute spent avoiding problems up front can save much more time down the road when things stop working properly.

Date formats

Which came first, 3/7/2017 or 5/4/2017?  The answer depends on where you are in the world when asking the question.  In the United States, dates are commonly represented as month/day/year, so these dates would usually be interpreted as March 7 and May 4, respectively.  In many other countries, dates are represented as day/month/year, so they would be interpreted as July 3 and April 5, respectively.

This becomes problematic when a data file containing date information is read by a computer system.  Each time the system encounters a field known to be a date, it must decide how to interpret the value.  Fortunately, most modern systems allow us to choose the format used for inputting and outputting dates.  However, if the date format is not chosen carefully, it can result in one of the most pernicious types of errors in computer systems: one that does not raise a flag immediately and lies dormant for some time.  A date that is incorrectly interpreted can cause a myriad of problems, as was widely publicized at the end of the last century.
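A quick Python sketch makes the hazard concrete: the same string yields two different calendar dates depending on which format the program tells the parser to expect.

    from datetime import datetime

    raw = "3/7/2017"

    # Interpreted as month/day/year (common in the United States): March 7, 2017
    as_us = datetime.strptime(raw, "%m/%d/%Y")

    # Interpreted as day/month/year (common elsewhere): July 3, 2017
    as_intl = datetime.strptime(raw, "%d/%m/%Y")

    print(as_us.date(), as_intl.date())   # 2017-03-07 2017-07-03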

Given enough data points, it may eventually become clear whether dates have been written starting with the month or day (e.g. if one of the values is 3/15/2017, the format cannot be day/month/year because 15 cannot refer to the month, so the format is probably month/day/year).  This approach is suboptimal because it requires an additional step which is not guaranteed to work properly in all cases.  A better approach is to avoid the problem altogether by taking care when choosing a date format.
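Sketched in Python, that inference step might look something like this (guess_date_order is a hypothetical helper, and it assumes slash-separated date strings):

    def guess_date_order(dates):
        """Guess whether slash-separated dates are month-first or day-first.

        Returns "MDY", "DMY", or None if the sample never disambiguates.
        """
        for value in dates:
            first, second, _year = value.split("/")
            if int(first) > 12:
                return "DMY"   # the first number cannot be a month
            if int(second) > 12:
                return "MDY"   # the second number cannot be a month
        return None

    print(guess_date_order(["3/7/2017", "3/15/2017"]))   # MDY
    print(guess_date_order(["3/7/2017", "5/4/2017"]))    # None -- still ambiguous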

To eliminate ambiguity when working with dates, use the format YYYY/MM/DD whenever possible.  This represents a four-digit year, followed by a two-digit month, followed by a two-digit day.  March 7, 2017 would be represented as 2017/03/07.  This format is widely understood and eliminates the ambiguity that can occur when the year appears at the end.
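In Python, for example, producing the unambiguous form takes one line (shown here with the standard library's datetime module):

    from datetime import date

    d = date(2017, 3, 7)
    print(d.strftime("%Y/%m/%d"))   # 2017/03/07
    print(d.isoformat())            # 2017-03-07, the equally unambiguous ISO 8601 form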

Field delimiters

A common method for storing tabular data is in CSV (comma-separated values) format.  In a CSV file, each line contains one row of a table.  Within each line, a delimiter character appears in between each value, demarcating the columns.  The delimiter character is usually a comma or a tab.
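For instance, a small customer table might be stored like this (the column names and rows are invented for illustration), with the comma marking where one column ends and the next begins:

    account_id,customer_name
    1001,Jane Smith
    1002,Rahul Patel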

A problem can arise when one of the values to be stored contains the delimiter character.  For example, a person's name may contain a comma (e.g. "Martin Luther King, Jr.").  In this situation, a line containing that value will contain an extra delimiter.  Software that treats each occurrence of the delimiter as a new column will then see an inconsistent number of columns from one line to another.

One strategy is to choose a delimiter character that does not appear in any of the values.  This can help minimize problems; however, it is not guaranteed to avoid them completely as new data files are created in the future.  A better approach is to wrap values that may contain the delimiter in double quotes, and to ensure that literal double quote characters inside a value are specifically marked (or "escaped," in programmer speak), most commonly by doubling them, though some tools use a backslash instead.  This ensures that the data file will be parseable regardless of the data that needs to be stored.
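Most programming languages include a CSV library that applies these quoting rules for you.  A minimal Python sketch (the rows are made up for illustration):

    import csv
    import io

    rows = [
        ["id", "name"],
        ["1", "Martin Luther King, Jr."],   # value contains the delimiter
        ["2", 'Robert "Bob" Smith'],        # value contains a double quote
    ]

    buffer = io.StringIO()
    writer = csv.writer(buffer, lineterminator="\n")
    writer.writerows(rows)                  # quoting and escaping handled automatically
    print(buffer.getvalue())
    # id,name
    # 1,"Martin Luther King, Jr."
    # 2,"Robert ""Bob"" Smith"

    # Reading it back yields a consistent two columns per row.
    for row in csv.reader(io.StringIO(buffer.getvalue())):
        print(len(row), row)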

Units of measure

Sally's water meter recorded 350 gallons of water used.  John's recorded 200 cubic feet of water used.  Who used more water?  This is a question with a simple answer (John did; 200 cubic feet is roughly 1,500 gallons).  But what if the units were not specified?  If all we know is that Sally used 350 and John used 200, we might decide that Sally used more, but only if we first assume that their meters record water using the same units.  Even if that assumption is correct, if we don't know the units, we won't be able to bill properly for the water or compare the quantity to an amount stored in other systems.

Quantitative values (i.e., measurements) should always have units specified.  When preparing a data file, you can provide information about units as a separate field.  For example:
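Using the two meters described above (the field names and layout are just one possibility):

    customer,usage,units
    Sally,350,gallons
    John,200,cubic_feet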

Alternatively, units can be provided in documentation that accompanies the data.  One advantage of including units inline in the data is that anyone who has the data automatically has the units as well, even if the documentation is not accessible.  Another benefit is that the units can vary from one record to another, as in the earlier example of two people whose water meters recorded in different units.  However, in some cases it may not be practical to provide units inline, and good documentation can help fill this gap.

Keep clear and carry on

Data parsing errors are not unusual.  However, with a small investment they can be minimized.  By avoiding common data pitfalls and making the right choices at the outset, you will eliminate unnecessary troubleshooting and set yourself up for success.

How Dashboards Help Decision-Makers at Water Utilities

By Renee Jutras, Full Stack Developer

Data has become part of the way we tell stories today. Online articles use maps and graphs to add a splash to their stories because, as they say, “a picture is worth a thousand words”. And it’s true: a well-thought-out data visualization can convey much more information than a description alone, and lets the viewer draw their own conclusions about the information. The difference between a clear positive trend and a potentially coincidental one is instantly recognizable on a graph.
Dashboards take graphs even further by adding organization and interactivity. The best dashboards help you continuously monitor your pain points while giving you the power to explore your data visually, as freely as possible.

In order to take water utilities further into the future, better technology is needed. Valor Water Analytics’ dashboards put vital information at the fingertips of decision-makers at utilities, so that they can start to take action based on their data.

Mind The Data Gap

There is a lot of talk about Big Data these days, and if you are anywhere near Silicon Valley, it is more like a deafening roar. As important and difficult as Big Data is, we are also working to solve challenges of a different sort that gets relatively little attention: what we call Diverse Data. The key distinction between Big Data and Diverse Data is how structurally similar one piece of data is to another.

Water Energy Nexus Work in California

In February 2016, the California Public Utilities Commission (CPUC) adopted Decision 15-09-023, which provides a set of analytical tools to quantify the benefits of water savings. The purpose of one of the tools, the water-energy nexus calculator, is to enable the CPUC, Investor-Owned Utilities (IOUs), and other stakeholders to quantify and capture ‘embedded energy’ savings stemming from water conservation programs. The CPUC also issued an Assigned Commissioner’s Ruling Regarding Advanced Meter Infrastructure Pilot Proposals and Setting Workshop (November 20, 2015).

Using Big Data to Improve Water Utility Revenues with Valor Water President Christine Boyle

Dig deeper into Valor Water through The Water Values Podcast!

During this episode, you will learn more about:

  • How Christine started Valor Water
  • The University of North Carolina’s water program
  • The four platforms Valor Water has commercialized for utility revenue
  • Using software to find anomalies in the water distribution system
  • “Real losses” from the water distribution system
  • Using software to identify customers with potential payment problems
  • How Valor Water identifies and segments customer classes for messaging about bill payment and cut-offs
  • Using software to assist utilities with rate optimization and climate change planning
  • “Stress testing” utilities for climate change scenarios
  • Using software for conservation planning
  • Meter fleet management applications
  • The different data points and types that Valor Water uses
  • How Valor Water helps utilities “unleash the power of their data”

When Big Customers Make Big Changes

Valor Water Analytics recently built Non Residential Customer Sales Dashboards for four utilities in North Carolina. Read about the project’s Plateau Analysis, and how these utilities are putting the findings into action, on project partner the University of North Carolina’s Environmental Finance Center blog: When Big Customers Make Big Changes.

Untangling Public Benefits of California’s Prop. 1

On November 4, 2014, California voters overwhelmingly approved Assembly Bill 1471, the Water Quality, Supply, and Infrastructure Improvement Act of 2014, more commonly known as Proposition 1 or the Water Bond. This bill unlocked $7.12 billion in bond funds to pay for water projects throughout the state. Many aspects of the bill were written in rather general language, and this and subsequent pieces aim to unravel its key elements.

Excellent descriptions of the full bill have been written by the Association of California Water Agencies, SPUR, and the Legislative Analyst’s Office, as well as others.