Enabling Endpoint AI

We build AI-enabled low power sensor devices and cloud-based analytics tools to help monitor & improve the environment around us.

Contact Us
Open Platform. Easy to Integrate.

Embedded AI hardware & sensors built around an open platform.

At EdgeMachines we develop cutting-edge, end-to-end IoT and edge computing systems for the modern world using open tools, protocols and standards. Our hardware and software platform lets us deploy low-power devices in the real world without the need for expensive, high-bandwidth infrastructure. Using our AI models and FPGA & ASIC accelerators, we create advanced IoT systems that gather complex data from the real world. From harsh industrial environments to remote national parks, find out how our AIoT platform can monitor your world.

We focus on developing products & projects that use open APIs, putting data ownership in the hands of our clients, reducing installation & processing costs, easing integration, and reducing lock-in risk.

Advanced cameras combined with broad-spectrum microphones, onboard AI acceleration, and calibrated environmental sensors make the EdgeMachines' IoT platform uniquely capable.

Optional depth vision
Stereo Vision for applications where distance matters
Hear everything
With in-built microphones, detect events cameras can’t see
Embedded AI Acceleration
Optimised AI models that run on FPGA or ASIC AI Accelerators

Exceptionally Power Efficient

High TOPS per Watt
Low Power Optimised Algorithms
Optional Solar Power & Battery for 24/7 Operation
NVIDIA Jetson Nano - 10 Watts
Raspberry Pi 4B - 6 Watts
Google Coral Dev Board - 5 Watts
EdgeMachines - 2-3.5 Watts*
Kendryte K210 + ESP32 - ~1 Watt

Lower is better
* up to 3.5W when using spatial AI

Detect Impulse Sounds

Sound sensing for a holistic world view

Beyond simply monitoring noise levels in decibels, sound sensing uses our embedded AI hardware to continually process & classify sounds on the device. Because the device does all the processing, no audio data is ever recorded or transmitted off the device.

Impulse Sounds
Embedded AI models are always listening, allowing detection of short but important sound events such as a car crash or gunshot. Create safer places and enable local authorities to be proactive with real-time notifications of compliance breaches.
Conservation & Protection
Our sound AI models are capable of classifying human-made noise into over 16 categories. From construction & traffic noise disturbing local residents to chainsaws being used in National Parks, sound provides a wealth of new information.
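As an illustration of the idea behind on-device impulse detection, the sketch below flags frames whose loudness jumps sharply above the background level. This is a deliberately simplified energy-threshold heuristic, not EdgeMachines' actual AI models; the function name, frame size and threshold are assumptions chosen for the example.

```python
import math

def rms_db(frame):
    """Root-mean-square level of a frame of samples, in dB (full scale)."""
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))
    return 20 * math.log10(max(rms, 1e-10))

def detect_impulses(samples, frame_size=256, jump_db=20.0):
    """Flag frames whose level jumps sharply above the previous frame.

    A sudden rise of `jump_db` decibels between adjacent frames is a
    crude proxy for an impulse sound (crash, gunshot, door slam).
    Returns the starting sample index of each flagged frame.
    """
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples) - frame_size + 1, frame_size)]
    hits = []
    prev = rms_db(frames[0]) if frames else 0.0
    for n, frame in enumerate(frames[1:], start=1):
        level = rms_db(frame)
        if level - prev > jump_db:
            hits.append(n * frame_size)
        prev = level
    return hits

# Quiet background with one loud burst in the middle:
signal = [0.001] * 1024 + [0.5] * 256 + [0.001] * 1024
print(detect_impulses(signal))  # [1024] - the burst's sample offset
```

Because only the detection result (a timestamp and a label) needs to leave the device, raw audio never has to be stored or transmitted.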

Classify environmental sounds to detect: traffic, hooning cars, chainsaws, gun shots, crashes, barking dogs, yelling, sirens, construction, fighting, wildlife.

8:02AM Traffic Noise
9:18AM Construction Noise
1:22PM Sudden Crash (possible car accident)
3:51PM Dog Barking
5:10PM Traffic Noise
6:35PM Construction Noise (compliance breach)
10:55PM Loud Music
11:20PM Hooning Vehicles (notify local authorities)
01:14AM Screaming (notify local authorities)
Spatial AI. Model Distance.

When every moment counts, AI-powered depth perception gives important context

Real-time identification of hazards in workplace and home settings can be performed on device. Stereo vision allows our sensors to understand depth within a scene.

Stereo Vision
Embedded spatial AI allows for advanced sensing applications without the need for expensive hardware (e.g. LIDAR). Accurately estimate object size & the distance between objects.
Human-Machine Safety
Detect dangerous interactions between humans and other objects, monitor social distancing or identify heavy vehicles.
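The geometry behind stereo depth estimation is the classic pinhole relation Z = f · B / d: depth is the focal length times the camera baseline, divided by the pixel disparity between the two views. A minimal sketch, with hypothetical camera parameters (not the specifications of EdgeMachines hardware):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo relation: Z = f * B / d.

    disparity_px : horizontal pixel shift of a feature between the
                   left and right images
    focal_px     : focal length expressed in pixels
    baseline_m   : distance between the two camera centres, in metres
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical module: 450 px focal length, 7.5 cm baseline.
# A feature shifted 45 px between the views is 0.75 m away:
print(depth_from_disparity(45, focal_px=450, baseline_m=0.075))  # 0.75
```

Note the inverse relationship: nearby objects produce large disparities and are measured precisely, while depth resolution degrades with distance.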


Communication Protocols & Data Integrations work with EdgeMachines, allowing you to build on your existing IoT investment

More integrations can be added as needed for your use case
LoRaWAN
WiFi
The Things Network (TTN)
MQTT
NB-IoT
LTE-M
OpenBalena
GCP IoT Core
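To illustrate the kind of integration an open API makes possible, the sketch below builds a detection-event payload and shows how it might be published over MQTT. The field names and topic are hypothetical, not EdgeMachines' actual schema; the point is that only metadata (a label and a confidence score) leaves the device, never raw audio or video.

```python
import json
import time

def make_event(sensor_id, label, confidence, ts=None):
    """Build a hypothetical detection-event payload as JSON.

    The schema here is illustrative only: a sensor identifier, the
    classified event label, a confidence score, and a Unix timestamp.
    """
    return json.dumps({
        "sensor": sensor_id,
        "event": label,
        "confidence": round(confidence, 2),
        "ts": ts if ts is not None else int(time.time()),
    }, sort_keys=True)

payload = make_event("em-0042", "impulse_sound", 0.934, ts=1700000000)
print(payload)

# A client would then publish this on a topic such as
# "sensors/em-0042/events" with any MQTT library (e.g. paho-mqtt):
#   client.publish("sensors/em-0042/events", payload)
```

Because the payload is plain JSON over a standard protocol, it can be consumed by an existing MQTT broker, The Things Network, or a cloud pipeline without any proprietary tooling.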
Speak with a Solution Engineer

Privacy built in

We are now entering the age of edge computing. With devices becoming more intelligent and users becoming more privacy conscious, edge computing allows us to deploy sensor solutions that need no cloud processing. Performing computation & inference on the device at the edge means devices require less bandwidth and infrastructure. Our sensors are designed to operate over low-bandwidth, low-power communication networks (LP-WANs) and are faster to deploy.

Privacy first design
By performing data analysis & machine learning on the device itself, video, motion and sound data never needs to leave the device; it's never broadcast over the internet or processed in the cloud. Our sensors tell you what's happening without having to show you.
Open protocols
Our communication protocols and APIs are open to use, meaning you can quickly integrate with the sensor directly, and because the device does the analytics you don't even need to use our cloud service.