1. Can you share a little bit around your background? How has your career progressed? What led you to get into your chosen area of AI?
I have always been interested in music and language from practical, cognitive and computational perspectives. Growing up, I learned to speak several languages and play many musical instruments.
Even though I was trained in engineering, I was highly interested in studying the similarities and differences between music and language – both highly structured sound systems – from a cognitive neuroscience perspective. When I joined the Music Technology Group at UPF in 2013, I really started down a distinct path: AI for music and NLP. I’ve pursued that same path ever since.
2. What do you enjoy most about working in AI?
Enabling stunning experiences and extracting insights from unstructured data.
3. In your opinion, what are the biggest drivers of Edge AI demand?
Edge AI has many advantages, including faster response times, reduced dependence on network connections, and mitigation of privacy concerns. The growing availability and feasibility of AI on IoT devices is one big driver – including the convenience that you don’t need to take out your phone constantly.
4. What is the biggest challenge holding Edge AI back?
Lots of progress has been made in this space, such as pruning DNNs and quantization. In many cases, we can achieve high accuracy at a fraction of the model size, with tradeoffs in memory and compute. However, in particular applications (and for more complicated models and features), we still face challenges in piping large volumes of multi-channel, multi-sensor data and making the models meet stringent latency requirements.
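To illustrate the quantization idea mentioned above, here is a minimal sketch of generic post-training symmetric int8 quantization – not any specific production pipeline – showing how a float32 weight tensor can be stored at a quarter of the size with a small reconstruction error:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: int8 weights plus one float scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float32 weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)  # toy weight matrix
q, scale = quantize_int8(w)

# int8 storage is 4x smaller than float32
compression = q.nbytes / w.nbytes
# worst-case rounding error is half a quantization step
max_err = np.abs(dequantize(q, scale) - w).max()
print(compression, max_err)
```

In practice, frameworks refine this basic scheme with per-channel scales and quantization-aware training to keep the accuracy loss small on real models.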
5. What are three ways in which Edge AI is reshaping the future?
People will experience products with better privacy, faster processing, and more widespread availability of AI features in their daily lives.