
Exploratory Research on SmartEQ

PROJECT SCOPE

Discover user needs, pain points and attitudes towards smart EQ to inform the development of a new AI product feature

  • Brand: TC-Electronic

  • Timeframe: 8 weeks

  • My Role: UX Researcher

  • Team: Product manager, UX designer, 2 machine learning engineers

  • Methods: Survey, Competitive Analysis, Usability Testing, Prototype Evaluation

  • Tools: Useberry, Adobe XD, MS Forms, Miro, Teams

Project Overview

[Figure: SmartEQ project process overview]

Meeting with stakeholders

Because the product could be used by different user groups (e.g., musicians, sound engineers) with different levels of expertise (from beginner to professional), it was essential to identify whether these groups had different needs from this technology. The team had several assumptions about the needs of different customer segments that had not been tested. Additionally, when our machine learning engineers started working on this project, they raised questions and asked the product team for explicit requirements. The product team could not answer the engineers' questions because the technology was novel, and they had not done enough research in this area to derive design solutions or validate their assumptions. The absence of specific requirements quickly became a blocker for the engineers, who needed more precise direction to make critical technical decisions.

Some assumptions about the needs of different user groups

  • Less skilled users need simpler user interfaces with very few controls.

  • Less skilled users need less explanation and less ability to correct the AI's recommendations.

  • Visual feedback that helps users understand why the AI has made particular recommendations is more useful to advanced users.

  • Professional users need full control over the interface and the ability to correct or override the AI model's recommendations.

At this point, it was clear to me and TC Electronic's UX lead that we needed to conduct user research to better understand user needs, test the stakeholders' assumptions, define the requirements, and drive the development of the feature forward.

Project constraints

We had no product to test, we needed results fast, and resources were limited: I was the only UX researcher on the project and had other concurrent research projects.

Research plan

I began with brief desk research to identify relevant previous work. I came across a highly relevant paper from Microsoft Research (Li et al., 2023), which provided both an interesting method for testing interaction design concepts early in the development process and a set of validated UX metrics for assessing users' feelings, trust, and perceived product quality. Drawing on the methods from this paper and an earlier Microsoft paper that provides guidelines for the design of Human-AI interaction (Amershi et al., 2019), I had the essential elements to begin writing the research plan.

I devised a research plan, which I presented and refined with the team. The plan consisted of three user studies:

  1. A traditional survey to gather data on user perceptions and attitudes towards AI-augmented equalisers and audio effects.

  2. A usability study using similar competitor products as a proxy, since we had no product of our own to test yet.

  3. A prototype evaluation to measure the impact of different interaction design methods on specific UX metrics.

 

After a few iterations, and given the constraints of the project (no product, little time, and limited resources), we signed off on the plan.

User attitudes survey 

We conducted this research early during the discovery phase of the project to help inform product and business strategy. This research aimed to identify trends in consumer behaviour and gather insights into users' attitudes and experiences using AI-assisted equalisers and "smart" audio effects.

Research questions
  • What are people's attitudes towards smart equalisers?

  • What are people's experiences of using smart equalisers?

  • Do participants' feelings, trust, and perceived product quality vary with their age, skill type, and skill level?

  • What do participants with experience of smart EQs like and dislike about them?

Survey design
[Figure: Survey study design flow]

A screening survey was presented to participants to ensure they had some experience with the type of music production software we were interested in. Participants who passed the screening (marked as blue profile icons in the diagram) were directed to the main survey. The presentation order of the questions was randomised to minimise order effects.
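To illustrate this flow, here is a minimal Python sketch of the screening-then-randomisation logic. The question IDs and the screening item are hypothetical placeholders, not the actual questionnaire; the real survey was administered with the survey tooling listed above.

```python
import random

# Hypothetical item IDs; the real questionnaire items lived in the survey tool.
SCREENING_QUESTION = "daw_experience"  # screens out people with no music-production experience
ATTITUDE_QUESTIONS = [
    "trust_in_smart_eq",
    "perceived_usefulness",
    "feeling_of_control",
    "willingness_to_adopt",
    "past_experience_smart_eq",
]

def questions_for(participant_id: int, passed_screening: bool) -> list[str]:
    """Return the ordered questions a participant sees.

    Everyone answers the screening question first; only participants who
    pass it continue to the attitude questions, which are shown in an
    independently randomised order to minimise order effects.
    """
    if not passed_screening:
        return [SCREENING_QUESTION]
    order = ATTITUDE_QUESTIONS[:]
    random.Random(participant_id).shuffle(order)  # reproducible per-participant order
    return [SCREENING_QUESTION] + order

print(questions_for(42, passed_screening=True))
```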

Data analysis of 261 consumers' attitudes towards AI in music production
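As a sketch of how responses like these could be compared across user segments (one of the research questions above), here is a hedged Python example. The data are placeholders standing in for the 261 real responses, and the column names, 7-point scales, and Kruskal-Wallis test are illustrative assumptions rather than a description of the actual analysis pipeline.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Placeholder data standing in for the 261 survey responses.
rng = np.random.default_rng(0)
n = 261
df = pd.DataFrame({
    "skill_level": rng.choice(["beginner", "intermediate", "professional"], size=n),
    "trust": rng.integers(1, 8, size=n),        # hypothetical 7-point Likert item
    "usefulness": rng.integers(1, 8, size=n),   # hypothetical 7-point Likert item
})

# Compare each UX metric across skill levels with a Kruskal-Wallis test,
# a reasonable default for ordinal Likert responses.
for metric in ["trust", "usefulness"]:
    groups = [g[metric].to_numpy() for _, g in df.groupby("skill_level")]
    h, p = stats.kruskal(*groups)
    print(f"{metric}: H={h:.2f}, p={p:.3f}")
    print(df.groupby("skill_level")[metric].median())
```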

Usability Study

Concept Validation

In collaboration with the product stakeholders, we decided that, in this context, we should focus on measuring the impact of three of the Human-AI interaction guidelines proposed in previous research (see Amershi et al., 2019).

Research questions

  • Do contextual information and the ability to correct the AI's outputs affect users' feelings, level of trust, and perceived usefulness of the system?

  • Which interaction methods do users prefer, and why?

  • What are participants' perceptions and reasons for preferring one prototype over another?

  • What do they like and dislike about the different interaction methods?
Prototype Design

I used Adobe XD to create four video prototypes exploring different interactions with the AI-Equaliser. I manipulated three UX features in the design of the prototypes:

  1. The level of control the user has over the automation,

  2. The ability to correct and dismiss the AI's recommendations,

  3. The provision of contextually relevant information to help the user understand how the AI derives its suggestions.

The design of the four prototypes varied in how they combined these three aspects.
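The condition matrix below is purely illustrative: the prototype labels and the factor assignments are hypothetical, showing only the general shape of a design that crosses the three manipulated features, not the actual mapping used in the study.

```python
# Illustrative only: one way four prototype conditions could combine the
# three manipulated UX features. Not the actual assignments used in the study.
conditions = {
    "prototype_A": {"user_control": "full",    "can_correct_ai": True,  "contextual_explanation": True},
    "prototype_B": {"user_control": "full",    "can_correct_ai": True,  "contextual_explanation": False},
    "prototype_C": {"user_control": "limited", "can_correct_ai": False, "contextual_explanation": True},
    "prototype_D": {"user_control": "limited", "can_correct_ai": False, "contextual_explanation": False},
}

for name, factors in conditions.items():
    print(name, factors)
```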

Study Design
[Figure: Study procedure]
Data analysis

Users strongly preferred the Semantic Graph prototype, followed by the Tone Match prototype, and the magnitude of this effect did not correlate with users' skill level. The qualitative survey results show that participants preferred the Semantic Graph interface because it is more transparent, provides semantic explanations, and bridges the gap between hearing and thinking. They found the interface simple yet educational, providing rich visual feedback and contextual information.
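For illustration, one possible way to check such a preference pattern and its (lack of) association with skill level is sketched below. The vote counts and the contingency table are placeholder numbers, not the study's data, and the chi-square tests are an assumed analysis choice.

```python
import pandas as pd
from scipy import stats

# Placeholder preference counts standing in for the real study data.
preferences = pd.DataFrame({
    "prototype": ["Semantic Graph", "Tone Match", "Prototype C", "Prototype D"],
    "votes": [38, 21, 9, 6],
}).set_index("prototype")

# Is the distribution of preferences different from chance?
chi2, p = stats.chisquare(preferences["votes"])
print(f"Preference vs. chance: chi2={chi2:.2f}, p={p:.3f}")

# Does preference depend on skill level? (hypothetical contingency table)
by_skill = pd.DataFrame(
    {"Semantic Graph": [13, 12, 13], "Tone Match": [7, 7, 7], "Other": [5, 5, 5]},
    index=["beginner", "intermediate", "professional"],
)
chi2, p, dof, _ = stats.chi2_contingency(by_skill)
print(f"Preference x skill: chi2={chi2:.2f}, p={p:.3f}")
```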

Recommendations

  • Removing the users' ability to control the EQ has a negative effect. The findings show that, in most cases, removing control negatively affects users' feelings of control, adequacy, usefulness, and perceived product quality.

  • Provide sufficient contextual information. Participants' responses suggest that visual feedback helps the user understand the decisions of the AI model to build trust, decrease suspicion towards the AI, improve usefulness, increase users' level of confidence in the recommendations, and influence their behaviour.

  • Intuitive contextual information and AI model explanations. The analysis of the open-question data suggests that users value semantic explanations that use terms closer to human perception and that connect the physical attributes of the sound to its perceived quality. Using higher-level semantics rather than signal-level explanations is therefore preferable, although this might be context-dependent.

  • Design for user customisation of AI suggestions. Letting the user refine suggestions is important because it is difficult for the machine learning model to anticipate users' intentions and stylistic preferences, or to take into account the broader musical context (i.e., how the sound will fit into the music piece). We should therefore not assume that the model's output will be acceptable to the user; the user should be given the option to adjust the parameter settings until they achieve the desired results.

  • The adaptability of the model to the user's style is important. The findings from one of the survey questions suggest that the model's adaptability matters to most users. Adaptability, whether through reinforcement learning or another mechanism (style transfer, constraining the model, etc.), could help personalise the model's outputs and account for the user's stylistic and aesthetic preferences.

Impact and Reflections

Project impact
  • The findings helped us identify the strengths and weaknesses of our competitors' products.

  • Helped design and engineering make important decisions.

  • Identified the likelihood of different market segments adopting AI software tools, which informed product and marketing strategy.

What did I learn?
  • A factorial survey using low-fidelity prototypes early in product development proved to be an effective, low-cost way to elicit product requirements and assess users' attitudes.

  • When the findings between complementary studies converge, it increases stakeholder acceptance and confidence in the results.
