I'm coming towards the end of my placement at SAIC Motor Technical Centre (SMTC) UK and I wanted to write about a project that I was involved with.
The company recently had a restructure with a new focus on innovation and the technology of tomorrow. In the spirit of this change, our department decided to build a "connected space" showcasing some of today's connected devices and integrating them into an area people can use for meetings and discussions.
We talk about tomorrow, but what can we do today?
This is the question we explored when we set out to build a “connected space” in our Longbridge office. The idea was simple: there are many off-the-shelf consumer devices on sale today that promise to make our lives more “connected”. How could we use some of these devices to provide a connected and integrated experience to the people on site?
“What is the purpose of such a project?” you might ask.
With our renewed focus on innovation it is beneficial for everyone to have an understanding of what is possible today. Having a clear understanding of where we are now makes it easier for us to visualise the sort of innovative technologies currently within reach, and which technologies might still be a fair distance away from being consumer ready.
“What is IoT?”
“IoT” is a buzzword that has been floating around recently, but what does it actually mean? The acronym stands for “Internet of Things”.
This is how Wikipedia describes it:
The Internet of things (IoT) is the inter-networking of physical devices, vehicles (also referred to as "connected devices" and "smart devices"), buildings, and other items—embedded with electronics, software, sensors, actuators, and network connectivity that enable these objects to collect and exchange data.
Put simply, it is the connection of many devices, or “things”, to the internet, allowing them to communicate with each other and share relevant information. This allows them to behave more intelligently, as they can build a better understanding of the user and the environment from all the data they gather from other devices. This project is an IoT project, as the devices we used were connected to the internet and could share information with each other.
Putting it together
What do we want it to do?
Before we bought the different devices, we wanted to have a list of objectives the final solution should meet. Voice recognition was a must. The ability to control lights was also considered highly desirable, as we knew this was a feature many people had already implemented fairly easily. It was also essential that the system could communicate with a car, as we wanted to demonstrate how connectivity could relate to a vehicle. We selected a few more features we thought the project should deliver on and then ordered the parts and devices we deemed necessary.
These are the devices we settled on after finalising our desired features:
- Amazon Echo – voice recognition provided through the “Alexa” service
- Philips Hue – Smart light control
- Flic buttons – Bluetooth buttons to allow the user to control features
- Samsung SmartThings – Acts as a central hub
- Samsung SmartThings outlet – Allows devices to be turned on and off by other devices
- Leap Motion – Allows the implementation of gesture control
- Netatmo Camera – Provides facial recognition
- LCD monitor – Used to build a “Smart Mirror”
- Windows powered tablet – To present an in-car interface to control devices in the connected space
- Wireless router & network switch – Allows us to network all the devices together
- Numerous Arduino microcontrollers and Raspberry Pi boards to allow us to implement our own functionality
“Works off-the-shelf” – a misleading phrase
During our initial research it started becoming clear to us that we weren’t going to be able to achieve the experience we desired by using the devices in their intended “plug-and-play” application. Once we started playing around with the devices we quickly realised that there were severe functionality limitations out of the box, and it would require a hefty dose of software engineering to get everything to play well with each other.
The cause of the problem was that the devices were not very open-ended. They were designed to be used in a certain way, and didn’t respond well when we tried to get them to do things they weren’t designed to do. For example, the Flic buttons were designed to connect to a phone and perform actions on it. We wanted them to unlock the car, or turn on the lights in the room. Another example of a limitation we had to overcome was Amazon Alexa. While it worked well with other devices designed for Alexa, it was another story getting it to interface with our car!
Tackling the limitations
The limitations we faced were vast and varied and they required a great deal of hardware and software engineering to overcome. Our biggest problem was getting devices that were never intended to be used together to talk to each other. We tackled this by using a few Arduinos and Raspberry Pis to act as the interface between these devices.
Getting Alexa to turn on the car lights
This was quite a difficult feat as Alexa was never designed to be used like this with a car, and our car was never designed to be interfaced with the external world.
The first problem we tackled was figuring out how to turn the car lights on and off without using the stalks. We achieved this by accessing the car’s diagnostics through the OBD port. We didn’t want to use our normal OBD tools, as they are designed for diagnosis rather than for interfacing with other devices. We therefore used an Arduino with a CAN interface to hook up to the OBD port. The Arduino was programmed to send specific OBD requests that could turn the lights on and off.
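To give a flavour of the approach, here is a minimal sketch of the light-toggle logic in Python. The CAN ID and payload bytes below are hypothetical placeholders (the real values come from the car’s diagnostic specification and aren’t reproduced here), and the actual project ran equivalent code on the Arduino in C/C++:

```python
# Sketch of the diagnostic request that toggles the car lights.
# All IDs and byte values are illustrative, not the real ones.

LIGHTS_REQUEST_ID = 0x7E0  # hypothetical diagnostic request CAN ID

def build_light_frame(turn_on: bool) -> bytes:
    """Build an 8-byte diagnostic payload that toggles the car lights."""
    # Illustrative byte layout: [length, service, sub-function, state, padding...]
    state = 0x01 if turn_on else 0x00
    return bytes([0x04, 0x2F, 0x55, state]) + bytes(4)

if __name__ == "__main__":
    # On real hardware these frames would be written to the CAN bus
    # (e.g. via a CAN shield on the Arduino, or python-can on a Pi).
    print(build_light_frame(True).hex())
    print(build_light_frame(False).hex())
```

The point is simply that the “lights on” command is nothing more than a fixed diagnostic frame with one state byte flipped.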
The next problem was getting Alexa to control the car lights. Alexa can control devices that were made to be compatible with it, but our DIY method was far from what Alexa was designed for. We tackled this by programming our Raspberry Pi to “pretend” to be an Alexa-compatible light. Alexa could then send a “turn on” or “turn off” request to our board based on what the user wanted. These requests were then relayed to the Arduino placed in the car, which could send the appropriate CAN message to toggle the lights.
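The relay logic on the Pi can be sketched as below. The device-discovery side of pretending to be an Alexa-compatible light is omitted, and `send_to_arduino` is a hypothetical stand-in for the actual link to the Arduino:

```python
# Sketch of the Pi-side relay: Alexa's on/off request is translated
# into a command for the Arduino sitting on the car's OBD port.

def alexa_request_to_command(request: str) -> str:
    """Map an Alexa on/off request to the command relayed to the car."""
    mapping = {"turn on": "LIGHTS_ON", "turn off": "LIGHTS_OFF"}
    if request not in mapping:
        raise ValueError(f"unsupported request: {request!r}")
    return mapping[request]

def send_to_arduino(command: str) -> None:
    # Placeholder: in the real setup this wrote to the serial/network
    # link connected to the Arduino in the car.
    print(f"-> relaying {command}")

if __name__ == "__main__":
    send_to_arduino(alexa_request_to_command("turn on"))
```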
Getting Alexa to unlock the car
We had similar problems interfacing with the locking mechanism of the car. To simplify the process, we decided to use the actual key to trigger the locks. The key was opened up and the internal circuit was connected to our Raspberry Pi. As before, the board was programmed to behave like an Alexa-compatible light. When Alexa sent a “turn on” request, it would signal that the user wanted to lock the car, and when it sent a “turn off” request it would signal that the user wanted to unlock the car. The Raspberry Pi would interpret the request and then send the appropriate signal to the key fob circuit to toggle the lock.
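The mapping from Alexa request to fob signal can be sketched as follows. The GPIO pin numbers are hypothetical, and the `pulse` function stands in for the real GPIO output that “presses” the fob button:

```python
# Sketch of the lock/unlock handling on the Pi. Alexa's "turn on" was
# repurposed to mean lock and "turn off" to mean unlock.

LOCK_PIN = 17    # hypothetical GPIO wired to the fob's "lock" trace
UNLOCK_PIN = 27  # hypothetical GPIO wired to the fob's "unlock" trace

def fob_pin_for_request(request: str) -> int:
    """Translate the Alexa on/off request into the fob pin to pulse."""
    if request == "turn on":
        return LOCK_PIN      # "on" = lock the car
    if request == "turn off":
        return UNLOCK_PIN    # "off" = unlock the car
    raise ValueError(f"unsupported request: {request!r}")

def pulse(pin: int, duration_s: float = 0.2) -> None:
    # Placeholder for the real GPIO pulse (drive the pin high,
    # wait briefly, drive it low) that triggers the fob circuit.
    print(f"pulsing pin {pin} for {duration_s}s")
```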
Getting the Flic buttons to work
To allow the Flic buttons to interface with our Raspberry Pi board, we used a Bluetooth interface. The board was programmed to pretend to be a mobile device and the Flic buttons connected to it and transmitted their press status. Our Raspberry Pi then processed the presses to determine which action to execute, such as toggling the lights, or toggling the car locks.
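The press-handling side reduces to a lookup from button and click type to an action. The button IDs and action table below are illustrative, not the exact configuration we used:

```python
# Sketch of the Flic press dispatcher on the Pi. The Pi presents
# itself as the paired device and receives press events over Bluetooth;
# this table decides what each press does.

ACTIONS = {
    # (button_id, click_type) -> action name
    ("flic-1", "single"): "toggle_desk_lamp",
    ("flic-1", "double"): "toggle_room_lights",
    ("flic-2", "single"): "toggle_car_lock",
}

def dispatch(button_id: str, click_type: str) -> str:
    """Look up which action a button press should trigger."""
    return ACTIONS.get((button_id, click_type), "ignore")
```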
Building our own features
There were a few features we wanted to implement that weren’t available as off-the-shelf solutions. These were a “smart mirror” which could display useful information to a user while still being able to see their reflection, and a gesture control system to control different features.
The smart mirror was built using a vertically oriented LCD display. A Raspberry Pi was used to generate the graphics that were displayed. A sheet of one-way glass was laid over the screen to act as the mirror. Dark areas of the screen would appear as a normal mirror, while the bright areas would be visible through the glass.
We bought the Leap Motion to allow us to provide gesture control. The Leap Motion device can track the motion of hands and can be used to capture the different hand gestures a user may make. It was connected to a PC which processed the raw data to get the position and motion of the hand. We then wrote software to process these hand motions and determine which gestures the user was making. Our program then used the processed information to send control signals to the Smart Mirror and the Philips Hue lights. Using gestures, the user could dim and brighten the lights, or toggle the mirror to show all or no information.
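As an example of the kind of processing involved, dimming can be reduced to mapping the tracked palm height onto a brightness value. The working range below is an assumed one for illustration, not the exact range we tuned:

```python
# Sketch of a gesture-to-brightness mapping. The Leap Motion reports
# palm position in millimetres above the sensor; here an assumed
# working range of 100-400 mm is clamped and scaled onto a 0-254
# brightness value (the Philips Hue API uses a similar range).

MIN_HEIGHT_MM, MAX_HEIGHT_MM = 100.0, 400.0
MAX_BRIGHTNESS = 254

def palm_height_to_brightness(height_mm: float) -> int:
    """Map palm height above the sensor to a light brightness value."""
    clamped = max(MIN_HEIGHT_MM, min(MAX_HEIGHT_MM, height_mm))
    fraction = (clamped - MIN_HEIGHT_MM) / (MAX_HEIGHT_MM - MIN_HEIGHT_MM)
    return round(fraction * MAX_BRIGHTNESS)
```

Raising a hand towards the top of the range brightens the lamp; lowering it dims the lamp, with out-of-range heights clamped so the value never overshoots.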
Putting it all together, we ended up with a tightly woven network of intercommunicating devices. There were many different interfaces and communication protocols. On the programming side, we had to use Python, Java, C++, Node.js, HTML and PHP to get the whole setup to work. We had to set up an IFTTT (If This Then That) account to manage some of the conditional processing of some of the devices, and we also set up an OpenHAB server to manage the lights and provide a user interface in the car to allow control over the lights.
The final topology looked like this:
So what does it do?
The Amazon Echo can be used to make voice queries. This includes controlling the lights, locking/unlocking the car, toggling the car lights, turning the TV on/off as well as various other voice requests such as playing music or getting news headlines. The Echo was connected to a Bluetooth speaker to provide better audio quality.
The Flic buttons can be used to control the different lamps in the room as well as to toggle the car lock. The Smart Mirror can show the current weather along with other relevant information. The gesture control system can be used to toggle the Smart Mirror as well as control the brightness of the desk lamp. The tablet placed in the car can also be used to control the lights within the connected space.
What do we think about the project?
Overall it was an interesting and challenging project to work on, and it gave us an understanding of what present-day technology can achieve. We also learned how limited current off-the-shelf solutions are, and that some functionality is still some time away from being completely consumer ready.