Bonseyes iOS App
The Bonseyes App demonstrates the use of the Bonseyes Software Development Kit (SDK). The SDK enables developers to enhance their apps and digital experiences with artificial intelligence features such as face detection, real-time analysis of human behaviour, and scene segmentation.
The app creates digital experiences with enhanced levels of safety and personalisation through real-time analysis of human behaviour in the live camera feed. Users can switch between the front and rear-facing cameras and can also analyse pictures and videos stored on their devices.
Collaboration
Olivera Miletić
Duration
3 Months
Role
UX/UI Designer
Responsibilities
Design, Prototype
Client
Darwin Digital
Application Flow
Version 1.0:
  • Core Features: Semantic Segmentation, Face Detection, Face Verification, and Emotion Recognition, which represent essential AI functionalities.
  • Basic Support Features: Includes About, Beta Signup, Terms of Service, Privacy Policy, and Settings, providing foundational informational and legal sections.

Version 2.0:
  • New AI Features: Scene Classification, Multiple Object Detection, Keyword Spotting, and Body Pose Detection are added, enhancing the app's AI capabilities and making it more versatile.
  • Consistency in Structure: The user flow remains organized, with the Dashboard as the central hub, ensuring a seamless navigation experience.

Version 3.0:
  • Expanded AI Tools: Demographic Detection is introduced, adding more specialized capabilities.
  • Scalability: The user flow accommodates a growing set of features without sacrificing structure, indicating that the app can continue to expand while maintaining usability.
Overview
The app employs a clean, minimalistic style with a dark theme, typical of tech and AI-focused applications. The use of contrasting colors (purple, cyan, and red) not only enhances readability but also provides a visually engaging experience that feels sophisticated and futuristic.
Main Navigation
To use the app, start it and select the feature you wish to launch:

  • Face Detection, to detect and count human faces in the wild, using either the front or the rear-facing camera (a minimal detection sketch follows this list).

  • Face Recognition, to register your face and then identify it whenever it appears in front of your iPhone's front or rear-facing camera.

  • Emotion Recognition, to detect your own and other people's emotions in real time, based on facial expressions. The graph at the bottom of the screen keeps a record of emotions over time.

  • Semantic Segmentation, to label each pixel of an image with the corresponding object category. You can choose between fast and accurate modes.
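
As a rough illustration of the detection flow behind the Face Detection item above (camera frame in, labeled boxes and a face count out), the sketch below uses Apple's Vision framework; the shipping app relies on the Bonseyes SDK's own detector, so treat this as a stand-in rather than the app's actual code.

    import Vision
    import CoreVideo
    import CoreGraphics

    // Illustrative only: the production app uses the Bonseyes SDK detector, not Vision.
    final class FaceCounter {
        private let request = VNDetectFaceRectanglesRequest()

        // Returns a normalized bounding box for every face found in one camera frame.
        func detectFaces(in frame: CVPixelBuffer) throws -> [CGRect] {
            let handler = VNImageRequestHandler(cvPixelBuffer: frame, options: [:])
            try handler.perform([request])
            return (request.results ?? []).map { $0.boundingBox } // 0...1 coordinates, origin bottom-left
        }
    }

The count of the returned array drives the on-screen face counter, and each rectangle is drawn as a bounding box over the camera preview.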
Face Recognition
The interface displays the Face Recognition feature, showing real-time identification with a labeled bounding box around the detected face. Key elements include a recording timer, a button to remove identity, and options to capture photos or video. Once saved, a confirmation pop-up allows users to share the video directly, enhancing usability and quick access to saved media. The design is clean, intuitive, and keeps essential controls easily accessible.
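
The "save, then share" step in that pop-up maps naturally onto the standard iOS share sheet. A minimal sketch, assuming the recording has already been written to a local file URL (both the URL and the presenting view controller are placeholders here):

    import UIKit

    // Minimal sketch of the post-save confirmation: present a share sheet for the
    // recorded clip. The file URL and presenting controller are assumptions.
    func presentShareSheet(for videoURL: URL, from presenter: UIViewController) {
        let activity = UIActivityViewController(activityItems: [videoURL],
                                                applicationActivities: nil)
        presenter.present(activity, animated: true)
    }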
Emotion Recognition

In designing this emotion recognition app, our goal was to create an intuitive, focused experience that provides users with real-time emotional insights. The live camera feed and bounding box highlight the area being analyzed, while the emotion label (e.g., "Surprise") gives immediate feedback on detected emotions. We included a line graph to visualize emotional changes over time, using color coding to differentiate emotions for easy interpretation. The minimal controls keep the interface uncluttered, directing the user's attention to the analysis itself. Displaying the frame rate reinforces the app's real-time performance, ensuring users feel connected to the process.
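
A minimal sketch of that color-coded timeline, using SwiftUI and the Swift Charts framework; the emotion labels and confidence values are placeholders, since the real scores come from the recognizer rather than from this view:

    import SwiftUI
    import Charts

    // One timestamped emotion reading; the label set is a placeholder for illustration.
    struct EmotionSample: Identifiable {
        let id = UUID()
        let time: Date
        let emotion: String      // e.g. "Surprise", "Happy", "Neutral"
        let confidence: Double   // 0...1 score from the recognizer
    }

    // Color-coded line graph of emotion confidence over time, shown under the camera feed.
    struct EmotionTimeline: View {
        let samples: [EmotionSample]

        var body: some View {
            Chart(samples) { sample in
                LineMark(x: .value("Time", sample.time),
                         y: .value("Confidence", sample.confidence))
                    .foregroundStyle(by: .value("Emotion", sample.emotion)) // one color per emotion
            }
            .chartYScale(domain: 0.0...1.0)
            .frame(height: 120)
        }
    }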

Object and Body Pose Detection
In this design, our goal was to create a user-friendly object and body pose detection interface that is clear, intuitive, and visually informative.

  • Object Detection: Each detected object is outlined with a bounding box labeled with its category (e.g., "Cat," "Chair," "Bus"). The color-coded boxes make it easy to distinguish between different objects. We aimed to keep the layout clean, so users can immediately see and understand what is being detected in the image without distractions. The frame rate display ensures users are aware of real-time performance.

  • Body Pose Detection: For body pose detection, we used colored lines and points to illustrate the detected body parts, helping users visualize the connections and movements in a clear, structured way. The latency and frame rate indicators reassure users of the app's responsiveness and accuracy.
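
For the pose overlay, the sketch below shows one way to obtain the joint positions that those colored points and lines are drawn from; it uses Apple's Vision body-pose request purely as a stand-in for the SDK's detector:

    import Vision
    import CoreVideo
    import CoreGraphics

    // Illustrative stand-in for the SDK's pose detector: returns the confidently
    // detected joints of the first person in a frame, as normalized image points.
    func detectJoints(in frame: CVPixelBuffer) throws
        -> [VNHumanBodyPoseObservation.JointName: CGPoint] {
        let request = VNDetectHumanBodyPoseRequest()
        let handler = VNImageRequestHandler(cvPixelBuffer: frame, options: [:])
        try handler.perform([request])

        guard let person = request.results?.first else { return [:] }
        let joints = try person.recognizedPoints(.all)
        return joints
            .filter { $0.value.confidence > 0.3 }                          // drop uncertain joints
            .mapValues { CGPoint(x: $0.location.x, y: 1 - $0.location.y) } // flip to top-left origin
    }

Drawing a point at each joint and a line between connected pairs (shoulder to elbow, elbow to wrist, and so on) produces the skeleton overlay described above.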


Semantic Segmentation and Face Detection
For this design, we focused on making semantic segmentation and face detection clear and accessible.

  • Semantic Segmentation: We used vibrant, contrasting colors to distinguish different objects and areas, such as sidewalks, vehicles, and people, making the scene easy for users to interpret. The category legend on the right aids in quickly identifying each color, while a "Fast" and "Accurate" toggle lets users adjust processing speed as needed (a small overlay sketch follows this list).

  • Face Detection: For face detection, we added labeled bounding boxes around each detected face. The app shows the count of faces and frame rate, giving users instant feedback on the detection’s accuracy and real-time performance. Consistency in color and minimal distractions keep the interface user-friendly and focused on results.
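
As a rough sketch of how the color legend turns into the tinted overlay on the segmentation screen, the snippet below paints a semi-transparent color per class over the frame; the categories, colors, and per-pixel class map are placeholders, since the real map is produced by the segmentation model:

    import UIKit

    // Placeholder legend standing in for the app's category palette.
    let legend: [UInt8: UIColor] = [
        0: .systemPurple,  // sidewalk
        1: .systemTeal,    // vehicle
        2: .systemRed      // person
    ]

    // Builds a semi-transparent RGBA overlay from a per-pixel class map, the way the
    // segmentation screen tints each region with its legend color. Illustrative only.
    func segmentationOverlay(classMap: [UInt8], width: Int, height: Int) -> UIImage? {
        precondition(classMap.count == width * height)
        var rgba = [UInt8](repeating: 0, count: width * height * 4)
        for (i, cls) in classMap.enumerated() {
            var r: CGFloat = 0, g: CGFloat = 0, b: CGFloat = 0, a: CGFloat = 0
            guard let color = legend[cls], color.getRed(&r, green: &g, blue: &b, alpha: &a) else { continue }
            rgba[i * 4]     = UInt8(r * 255)
            rgba[i * 4 + 1] = UInt8(g * 255)
            rgba[i * 4 + 2] = UInt8(b * 255)
            rgba[i * 4 + 3] = 140  // keep the camera frame visible underneath
        }
        guard let provider = CGDataProvider(data: Data(rgba) as CFData),
              let image = CGImage(width: width, height: height,
                                  bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue),
                                  provider: provider, decode: nil,
                                  shouldInterpolate: false, intent: .defaultIntent)
        else { return nil }
        return UIImage(cgImage: image)
    }

The "Fast" and "Accurate" toggle would simply select which model variant produces that class map before the overlay is drawn.
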
digitalnadesigns@gmail.com
currently based in Belgrade