Imagine a museum that doesn’t just showcase art — it listens, understands, and reacts to its visitors.
In our latest university IoT project, we set out to transform traditional museums into interactive, emotion-aware environments by combining IoT devices, voice-based sentiment analysis, and real-time data visualization. The goal? To turn visitor conversations into meaningful feedback — not through rigid surveys, but through natural speech.
The Challenge: Understanding Emotions Without Asking
Traditional sentiment analysis relies on written reviews or forms, often detached from the visitor’s real emotional experience. But what if we could capture authentic reactions, as they happen?
This project proposes a Smart Museum framework that listens to ambient conversations near artworks using strategically placed microphones. The captured audio is then analyzed using NLP techniques to determine whether a visitor’s sentiment is positive, neutral, or negative.
Two Approaches, One Vision
To explore different hardware contexts, we built two complementary systems:
PC-Based Java Implementation
A powerful desktop station located in reflection zones (e.g., lounge areas) collects voice data and provides real-time feedback via a dashboard. Here’s how it works:
- Speech Recognition: Vosk transcribes audio in real time.
- Sentiment Analysis: Depending on the language, either the Hugging Face APIs (for Italian) or Stanford CoreNLP (for English) analyzes the emotional tone (sketched below).
- Visualization: Node-RED processes data, stores it in InfluxDB, and visualizes it via Grafana dashboards.
- Communication: MQTT ensures lightweight and reliable messaging between components.
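The station itself is written in Java, but the data flow is easiest to see in a short sketch. Below is a minimal Python illustration of the sentiment-and-publish step for Italian speech; the model ID, MQTT topic, and broker address are illustrative assumptions, not the project's exact configuration:

```python
import json
import requests
import paho.mqtt.publish as publish

# Hypothetical Italian sentiment model served by the Hugging Face Inference API.
HF_MODEL = "MilaNLProc/feel-it-italian-sentiment"   # placeholder, not necessarily the project's model
HF_URL = f"https://api-inference.huggingface.co/models/{HF_MODEL}"
HF_TOKEN = "hf_..."                                  # your API token

def classify(text: str) -> dict:
    """Send a transcript to the Inference API and return the top label/score pair."""
    resp = requests.post(
        HF_URL,
        headers={"Authorization": f"Bearer {HF_TOKEN}"},
        json={"inputs": text},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # The API returns one list of {"label", "score"} dicts per input.
    scores = data[0] if isinstance(data[0], list) else data
    return max(scores, key=lambda s: s["score"])

transcript = "Questa sala è davvero emozionante"
result = classify(transcript)

# Publish a compact JSON payload for Node-RED to pick up (topic name is an assumption).
publish.single(
    "museum/lounge/sentiment",
    json.dumps({"text": transcript, "label": result["label"], "score": result["score"]}),
    hostname="localhost",
)
```

Node-RED subscribes to the same topic, reshapes the payload, and forwards it to InfluxDB, where Grafana picks it up.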
Arduino-Based Python Prototype
Using an Arduino Portenta H7 with a Vision Shield, this version focuses on lightweight, near-artwork sentiment detection:
- The Arduino captures raw audio near specific artworks.
- Audio is streamed to a PC where a Python script handles transcription via Vosk.
- A local Hugging Face model (run via the transformers library) performs offline sentiment classification (see the sketch after this list).
- The data is again routed through MQTT → Node-RED → InfluxDB → Grafana.
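As a rough sketch of that PC-side script, assuming 16 kHz mono WAV files arriving from the board; the Vosk model path, audio file name, MQTT topic, and the default classifier pulled by transformers are placeholders, not the project's exact choices:

```python
import json
import wave

import paho.mqtt.publish as publish
from transformers import pipeline
from vosk import Model, KaldiRecognizer

# Offline components: a Vosk acoustic model and a local Hugging Face classifier.
stt_model = Model("model")                      # path to an unpacked Vosk model (placeholder)
classifier = pipeline("sentiment-analysis")     # downloads a default model on first run

def transcribe(path: str) -> str:
    """Transcribe a 16 kHz mono WAV file captured near an artwork."""
    wf = wave.open(path, "rb")
    rec = KaldiRecognizer(stt_model, wf.getframerate())
    while True:
        data = wf.readframes(4000)
        if len(data) == 0:
            break
        rec.AcceptWaveform(data)
    return json.loads(rec.FinalResult()).get("text", "")

text = transcribe("artwork_07.wav")             # placeholder file streamed from the Portenta
if text:
    label = classifier(text)[0]                 # e.g. {"label": "POSITIVE", "score": 0.98}
    publish.single(
        "museum/artwork07/sentiment",           # topic name is an assumption
        json.dumps({"text": text, **label}),
        hostname="localhost",
    )
```

Running the classifier locally keeps the prototype offline apart from the one-time model download.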
This modular architecture makes the system easy to deploy in both high-power and low-power environments.
Why It Matters
By passively capturing voice data and analyzing emotions in context, the Smart Museum offers a more nuanced understanding of:
- How visitors feel about individual artworks
- The emotional flow throughout the museum visit
- Real-time trends that help curators adjust layouts or exhibitions
This means museum staff can respond proactively — improving not just exhibition quality, but the emotional resonance of the entire experience.
A Glimpse Into the Dashboard
Our Grafana interface brings sentiment data to life. Curators and guests can visualize how emotions evolve over time, see which pieces spark the strongest reactions, and even monitor language-based differences in engagement.
Clean, color-coded lines trace the museum’s emotional atmosphere — offering insights that were previously invisible.
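In the running system Node-RED performs the InfluxDB writes, but the shape of the stored data is easy to sketch with the Python influxdb-client; the bucket, organization, token, and tag names below are illustrative assumptions:

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection details for a local InfluxDB 2.x instance.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="museum")
write_api = client.write_api(write_options=SYNCHRONOUS)

# One point per analyzed utterance: tags for filtering in Grafana, fields for the values.
point = (
    Point("sentiment")
    .tag("artwork", "artwork07")        # hypothetical tag: which piece the microphone covers
    .tag("language", "it")              # hypothetical tag: language of the transcript
    .field("label", "positive")
    .field("score", 0.92)
)
write_api.write(bucket="smart-museum", record=point)
```

Grafana queries the same bucket, which is what makes per-artwork and per-language views like the ones described above possible.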
What’s Next?
The foundation is solid, but the future promises even more:
- Multilingual sentiment handling (simultaneous support for multiple languages)
- On-device inference with TensorFlow Lite Micro for full autonomy on Arduino boards
- Context-aware analysis, incorporating metadata like time of day, visitor path, or dwell time
- Scalable, plug-and-play deployment for other environments (retail, education, events)
Open-sourcing the platform would allow institutions to adopt and adapt the system to fit their unique needs.