My Role

The work was divided fairly evenly among the group members, but the processes I contributed to most were:

Research

Evaluation, Validation

Creating the design style and updating the styles in Figma

Prototyping


This project was worked on by:

Riya Phalak, Riya Jejani, Shruthi Challa

1. Discovery and Research

Defining The Flows

First, we looked extensively through the CBSE website and narrowed it down to four major existing flows that users would visit often. We also defined our user groups and created corresponding personas.

1.1 Choosing the tasks to work with

Tasks Chosen

1.2 Persona Creation

User Groups

2. Analysis

Evaluation

After deciding which flows to analyse, we used Nielsen’s 10 heuristic principles to evaluate each screen within the flows. All three members evaluated the screens individually, after which we sat together to compare our findings.

While some of the heuristic violations we spotted were similar, others conflicted. In those cases, we went back to the strategy, decided whether the observation was actually a violation, and framed the heuristic evaluation accordingly.
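A minimal sketch of how that comparison step can be tallied, assuming hypothetical screen names, heuristic labels, and a two-of-three agreement threshold (none of these come from our actual findings):

```python
from collections import Counter

# Each evaluator's set of (screen, heuristic) pairs flagged as violations.
# The screens and heuristics below are made up for illustration.
evaluations = [
    {("results page", "H4 Consistency"), ("affiliation page", "H6 Recognition")},
    {("results page", "H4 Consistency"), ("results page", "H8 Minimalist design")},
    {("results page", "H4 Consistency"), ("affiliation page", "H6 Recognition")},
]

# Count how many evaluators flagged each violation.
counts = Counter(v for evaluation in evaluations for v in evaluation)

# Violations flagged by a majority (assumed threshold: 2 of 3) are kept;
# the rest go back to the strategy for discussion.
agreed = [v for v, n in counts.items() if n >= 2]
disputed = [v for v, n in counts.items() if n < 2]

print("Agreed violations:", agreed)
print("To revisit against the strategy:", disputed)
```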

2.1 Heuristic Evaluation

Heuristic Violations

3. Testing

Concept Validation

Using the findings from the heuristic evaluation, we created mid-fidelity screens, which we then tested with one pilot participant.

3.1 Concepts for Pilot Test

Creating the first concepts

3.2 Pilot Test

Testing the first concept

3.3 Including changes from the pilot test

Making High Fids

The insights from our pilot test were incorporated into our high-fidelity screens.

TASK 1:

TASK 2:

TASK 3:

TASK 4:

3.4 User Interviews

Testing the redone concept

4. Validation Report

Validation Report

A summary of the findings from the interviews

4.1 Semantic Differential Scale

SDS - Semantic Differential Scale

An SDS form was created to be sent to each participant after their interview was over. The form had questions about their experience interacting with the original CBSE website and with our proposed concept.

Each question was rated on a scale of 1 to 5, from which participants could choose their inclination.

The results from the SDS forms are overlaid to give a better visualisation of the outcome.

Average score given for the original website: 3.7

Average score given for the proposed website: 4.15

We can see that the proposed flow’s results lean further towards the right (the positive end of the scale) than the original flow’s.
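As a concrete sketch of how these averages come about: each participant’s form gives one 1-5 rating per question for each version of the site, and the per-version average is the mean over all questions and participants. The question labels and scores below are hypothetical placeholders, not our actual responses:

```python
from statistics import mean

# One dict per participant; each bipolar question is rated 1-5.
# Labels and ratings are illustrative assumptions only.
ratings = {
    "original CBSE site": [
        {"confusing vs clear": 4, "cluttered vs clean": 3, "slow vs fast": 4},
        {"confusing vs clear": 4, "cluttered vs clean": 4, "slow vs fast": 3},
    ],
    "proposed concept": [
        {"confusing vs clear": 5, "cluttered vs clean": 4, "slow vs fast": 4},
        {"confusing vs clear": 4, "cluttered vs clean": 4, "slow vs fast": 5},
    ],
}

for version, forms in ratings.items():
    # Average across all questions and all participants for this version.
    scores = [s for form in forms for s in form.values()]
    print(f"{version}: average {mean(scores):.2f}")
```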

4.2 Insights

Insights

Through the interviews and the SDS results, we saw that the heuristic violations we had spotted were validated.

During the interviews, many of the comments participants made while using the ‘Think Aloud’ method aligned with and confirmed those violations.


Overall, it was found that most participants preferred the designs to have:

Visual uniformity

Main actions and filters presented clearly

Reduction in unnecessary steps

Usage of words that are widely understood, especially for labels


A change that did not entirely work:

The filtering system on the affiliation page deviated a bit too much from the original flow, which was counterintuitive since CBSE already had a good filter system there. In other places, like the results page, the addition of filters was appreciated: without them, users had to scan through a long list to find the relevant result link.


One of the most important things I learnt through this process was that at each step, we must keep going back to the strategy.

4.3 Afterthoughts

Afterthoughts

How can we make this better?

  1. Due to time constraints, not a lot of initial research was conducted. This is something I would like to improve on.

  2. Our participants were enthusiastic and pointed out aspects of the design that they liked. They did not shy away from mentioning other aspects that they felt were unnecessary or did not solve the issue. We prioritised which issues to tackle based on our timeline, so resolving the remaining issues is our next step.

  3. For our interviews, we told the participants that they could fill in the SDS form in their free time. This was a mistake, since we ended up getting only 2 out of 4 responses. This is something I will keep in mind next time.

  4. One more thing that I will be keeping in mind is to keep going back and forth between the layers of UX.

Let’s chat about work, and while we’re at it, I’d love to hear about your favorite plant :)
