A platform to assess the quality of datasets collected by autonomous-car sensors such as LiDAR, radar, and cameras. This program was part of the innovation team, and the following case study walks through how the problem of assessing dataset quality was solved through a digital interface.
The program had a small team of two. I started out as an Interaction Designer, but as the program progressed and the requirements grew, I took on the role of Visual Designer as well.
In today's day and age, where self-driving cars are being produced by companies such as Tesla, Waymo, and Zoox, there is still no good platform to accurately and efficiently assess the quality of the datasets those cars generate. To understand the problem better, I prepared a questionnaire for the client and conducted an interview session with them to learn about the issues they were facing with existing solutions.
UNDERSTAND / KNOWLEDGE TRANSFER
Before jumping right into designing the screens, we felt it was important for the design team to understand what the program is about and get a better idea of the client's expectations and motivations. This session also helped us gauge the amount of work that would be required and set a realistic timeline for it.
STEPS TO ASSESS QUALITY OF A DATASET
Understanding how the entire process of assessing dataset quality worked helped us immensely in identifying who our target users were and how they fit into the process. This session also gave us an in-depth understanding of the workflow and which steps were the most time-consuming.
Once we had a clear understanding of the system and the process workflow, we did a round of stakeholder interviews with the Principal Systems Engineer, Tech Lead, VP of Engineering, and the CTO. Since the program had been in the research phase for a long time, the purpose of the interviews was to gather insights on their key learnings, observations, motivations, and expectations.
PAIN POINTS GATHERED FROM THE INTERVIEW
1. A lot of data is repeated in the current workflow.
2. No way to view a consolidated / overall assessment of the data.
3. Very little evaluation is actually happening.
4. Data and graphs are difficult to interpret and become very confusing.
5. Very little JSON analysis and no relational analysis.
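To make the first two pain points concrete, here is a minimal sketch of flagging repeated records and rolling per-record checks into one consolidated number. The field names and the scoring rule (fraction of unique records) are my own illustrative assumptions, not the client's actual pipeline:

```python
import json

# Hypothetical sensor records; field names are illustrative only.
records = [
    {"frame_id": 1, "sensor": "lidar", "points": 120000},
    {"frame_id": 2, "sensor": "radar", "points": 450},
    {"frame_id": 1, "sensor": "lidar", "points": 120000},  # exact repeat
]

def find_duplicates(records):
    """Return records whose payload was already seen (pain point 1)."""
    seen, dupes = set(), []
    for rec in records:
        key = json.dumps(rec, sort_keys=True)  # canonical form for comparison
        if key in seen:
            dupes.append(rec)
        else:
            seen.add(key)
    return dupes

def consolidated_score(records):
    """One overall number instead of scattered per-record checks
    (pain point 2): here, simply the fraction of unique records."""
    unique = {json.dumps(r, sort_keys=True) for r in records}
    return len(unique) / len(records)

dupes = find_duplicates(records)
print(len(dupes))                    # number of repeated records
print(consolidated_score(records))   # overall uniqueness score
```

In a real pipeline the consolidated score would aggregate many such checks, but even this toy version shows how a single roll-up figure replaces eyeballing individual records.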
UNDERSTANDING OUR USERS
With the help of our first and second sessions, we had a clear understanding of who our users would be and how they would be involved in the process. We categorised the personas into three groups: Primary, Secondary, and Tertiary. Within each group, we further divided them into three levels (High Level, Mid Level, and Deep Level), reflecting the degree of involvement each persona would have with the platform.
SETTING UP THE STRUCTURE
Once we had enough information on each of our personas and how they would be involved in the platform, we spent some time creating the information architecture. This was a long and highly interactive session between the design team and the stakeholders, and it helped initiate conversations around the pain points and areas of improvement across the entire journey. We held constant back-and-forth review sessions with the stakeholders to gather feedback and make sure everybody was on the same page.
GETTING INTO THE DETAILS
After setting up the foundation for the platform, I created user flows for each persona, ideating countless paths each persona could take to complete their journey from start to finish in the most optimal way, and identifying the steps where they would be most involved. This helped us understand how many steps a particular user would need to achieve their task and at which points they would interact with other user groups.
IDEAS TO ACTION
Once our user flows were completed and approved by the client, it was time to get our hands dirty. I sketched countless ideas and brainstormed various possibilities with my team to create low-fidelity wireframes. Since this was a graph- and data-heavy platform, we also used this time to research the most suitable graph types for our data.
UNDERSTANDING THE WIREFLOWS
After the wireframes were done and dusted, I spent some time working on the wireflows and refining the graphs further to see the level of involvement each persona would have on the screens and how each persona would affect the others. To validate the flows further, I conducted a session with the stakeholders to get their final feedback on the screens and graphs before moving on to the visual designs.
Since we were working under a very tight deadline with a lot of deliverables to ship, we started building the component library relatively early, alongside the sessions being conducted, because we already had an idea of the basic components we would use across the platform. This saved us almost 4-5 days' worth of extra effort, and we were very proud of this decision.
FINAL VISUAL DESIGNS
1. DATASET OVERVIEW
2. SEGMENT OVERVIEW
3. ACCURACY DETAIL PAGE
4. ADEQUACY DETAIL PAGE
5. COMPLETENESS DETAIL PAGE
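As a rough illustration of how the three detail pages could roll up into the dataset overview, here is a hedged sketch. The metric definitions are my assumptions for illustration only (label-match rate for accuracy, sample count against a target for adequacy, populated-field rate for completeness), not the platform's actual formulas:

```python
# Hypothetical annotated frames; all field names are illustrative.
frames = [
    {"label": "car",        "ground_truth": "car",     "fields_filled": 9,  "fields_total": 10},
    {"label": "pedestrian", "ground_truth": "cyclist", "fields_filled": 10, "fields_total": 10},
    {"label": "car",        "ground_truth": "car",     "fields_filled": 8,  "fields_total": 10},
]

def accuracy(frames):
    """Fraction of labels matching ground truth (assumed definition)."""
    return sum(f["label"] == f["ground_truth"] for f in frames) / len(frames)

def adequacy(frames, target_count=1000):
    """Closeness to an assumed target sample size, capped at 1.0."""
    return min(len(frames) / target_count, 1.0)

def completeness(frames):
    """Average fraction of populated fields per frame (assumed definition)."""
    return sum(f["fields_filled"] / f["fields_total"] for f in frames) / len(frames)

def dataset_overview(frames):
    """Consolidated summary backing the three detail pages."""
    return {
        "accuracy": round(accuracy(frames), 3),
        "adequacy": round(adequacy(frames), 3),
        "completeness": round(completeness(frames), 3),
    }

print(dataset_overview(frames))
```

Each detail page would then drill down into the per-frame numbers behind its single headline figure on the overview.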