Plantplacer — when home is where your plants are

Mia Bekker
12 min read · May 9, 2021

An AI-powered app for placing and caring for your indoor plants.

“Where should I place my plant? Does my plant get enough sunlight? Or does it just need a little more shadow?”

These are frequently asked questions when we want to properly nurture our plants and add some greenery to our rooms. Plants have different needs and react to sunlight, water, and fertilizer in a variety of ways. But if you do not have a plant expert to tell you how to place and nurture your plants, how do you figure out what they need?

Why do you need Plantplacer?

The existing market of plant apps is quite diverse and you will be able to find a variety of smart plant apps targeting different types of user needs. Planta is a great all-around plant app for all your plant problems — but only if you have the premium upgrade. You can log the species of plants you have at home, get push notifications about when you should water them based on the weather in your area, and receive detailed instructions about different watering methods. On the other hand, Florish is a completely free app that gives a brief description of what your plants should look like when they’re healthy, care tips about water and light preferences, and a list of common issues that cause them to display signs of sickness (Chu, 2021). These are just a few examples of the existing plant apps.

Although there are already some smart and popular plant apps, we have found that the AI in these apps is not as capable as one might wish. We see this as an opportunity to build an app that combines the core functions of the existing plant apps with more advanced, well-functioning AI. Therefore, we present Plantplacer, a fusion of existing plant apps and interior decorating apps that targets the following identified user needs:

  • Learn about the inflow of light in your house and the decor of your rooms, since it is usually quite hard to figure out where a plant should go for optimal growth.
  • Help with imagining and identifying where certain types of plants fit into your interior.
  • Remember the types of plants you have and like, in order to recommend plants with similar needs or looks.

Our AI-driven app is mainly based on two types of machine learning systems: image classification and recommendation engines. We use image classification so the app can learn from pictures of different plants to tell species apart. The recommendation engine learns from pictures of the user's own plants and of their home, and recommends new plants that would thrive there, based on the user's plant preferences and the inflow of light.
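To make the classification idea concrete, here is a minimal, purely illustrative sketch. The real app would train a deep image classifier on plant photos; the feature vectors and species centroids below are hypothetical, chosen only to show the nearest-match principle behind telling species apart.

```python
# Illustrative sketch only, not the production model: each species is
# represented by a hypothetical feature centroid (e.g. leaf shape, leaf
# size, colour), and a photo's features are matched by cosine similarity.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical centroids, standing in for what training photos would teach.
SPECIES_CENTROIDS = {
    "monstera": [0.9, 0.8, 0.3],
    "cactus": [0.1, 0.2, 0.9],
    "fern": [0.7, 0.3, 0.2],
}

def classify(features):
    """Return the species whose centroid is most similar to the photo."""
    return max(SPECIES_CENTROIDS,
               key=lambda s: cosine(features, SPECIES_CENTROIDS[s]))
```

A photo whose (made-up) features resemble a monstera, e.g. `classify([0.85, 0.75, 0.25])`, would land on `"monstera"`; a deep network replaces the hand-picked vectors with learned ones, but the matching idea is the same.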

All of this is done via the camera in the user's phone, which serves as a light meter to measure the inflow of light, and by taking pictures of plants and the places where you wish to place your own plants or new ones. These machine learning systems were chosen to give the user the most personal AI-driven experience.
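A rough sketch of how a camera frame can act as a light meter: average the pixels' relative luminance and map it to a coarse light category that plant suggestions can key on. The thresholds and category names below are our own illustrative choices, not values from the prototype.

```python
# Assumed approach, not the actual app code: estimate the inflow of
# light from the mean perceived brightness of the camera frame.

def light_level(pixels):
    """pixels: list of (r, g, b) tuples with channels in 0-255."""
    # Rec. 709 luma coefficients approximate perceived brightness.
    luma = [0.2126 * r + 0.7152 * g + 0.0722 * b for r, g, b in pixels]
    mean = sum(luma) / len(luma)
    # Illustrative thresholds; a real app would calibrate these.
    if mean < 60:
        return "low light"
    if mean < 150:
        return "indirect light"
    return "bright light"
```

For example, a frame of dark pixels such as `light_level([(30, 30, 30)] * 10)` comes out as `"low light"`, while a bright windowsill frame maps to `"bright light"`.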

An example of a user interaction with the first edition prototype of Plantplacer.

When moving somewhere new…

To better understand an AI-driven experience, we use the framework of Kliman-Silver et al. (2020), which describes three dimensions of an experience: social/personal, discretionary/non-discretionary, and high independence/no independence. This preliminary categorization of the AI-driven experience helps us align our research methods and ensure that we design the right user experience for our product.

From this framework we consider Plantplacer to be personal, discretionary, and highly independent. The product is personal because it is intended for an individual's phone and learns about the individual's home: the flow of light, their plants, and their needs. It is discretionary because it is a temporary, task-based experience that starts when the user opts in and scans the room with their camera. Lastly, we consider it highly independent because of its capability to suggest other plants: it learns from your personal stock of plants and thereby suggests plants of the same sort and with the same needs.

We want the app to use machine learning to understand the individual user's needs: suggesting new plants that fit the flow of light in the room, suggesting where the user's own plants will thrive, and learning what types of plants the owner enjoys most.

Based on the user needs we defined earlier, we explored different scenarios in which some of these needs occur and ended up focusing on the scenario where you move to a new home. Here you might bring your own plants with you and want to explore where to put them. The app should help you place your plants based on the flow of light: when you hold the camera towards an area, the app guides you to select one or more plants from your own stock. In the same scenario, you might want to acquire new plants, and here the app should be able to tell you which plants would thrive in the area at which you point the camera. The app can learn from your existing plants and suggest new plants in the same category: type, watering needs, etc.

We used this insight further on when making the wireframes and selecting the method with which we tested the prototype on our users.

Live photos of the prototype for Plantplacer.

Prototyping Plantplacer

Our prototype takes its starting point in six design patterns for improving in-app navigation through UI adaptation (Yang et al., 2016). It includes a landing page that immediately asks the user to scan the picture taken, so the app can analyze the room and suggest which plant to place. This eliminates the need for a home page while keeping the fundamental functions of the app.

Single selections are also included as a fully automated adaptation, where the most suitable plants, the predicted items, are suggested for that particular spot in the room with the "try one of these" function. The app also builds on multi-screen transactions, automating orders in one screen to reduce navigation and confusion for the user. Last but not least, the app relies on user-generated content, photos, for the AI to function properly and handle sensor data.

The prototype is a mix of high and low fidelity, depending on which areas you look at. It includes automatic responses and has the visuals of a real app, which makes it more high-fidelity. Meanwhile, it lacks a lot of content and has only a few clickable links, which suggests a more low-fidelity prototype (Pernice, 2016).

When designing a product with advanced technology like Artificial Intelligence, testing at the beginning of the design process can be challenging. To better understand if our product gives the right experiences, we used the Wizard of Oz research method to explore how our users interacted with our product. The Wizard of Oz method can be used to explore everything from the interaction to the viability of the system and is especially effective when testing AI-driven experiences due to the variety and complexity of responses (Maulsby et al., 1993).

In practice, we used the Wizard of Oz method to test the AI of the app, even though the prototype ended up being high-fidelity, clickable, and independently functioning, and therefore did not necessarily call for a Wizard of Oz approach. The AI of the prototype was built around a pre-chosen picture, whereas the real software would let the user take their own picture. The scanning pages are also static pictures strung together to look like an animation and feel like a real scan of a room. In reality, the AI experience of the prototype is non-existent, but a Wizard of Oz-inspired approach, in which we influenced the interface, made it possible to test and evaluate the AI functions.

It could be argued that we could have used the Wizard of Oz method on an even earlier version of the prototype, or even simulated the AI functions ourselves (Pernice, 2016), but using parts of the method still helped us explore different aspects of Plantplacer: the general idea of a plant app, whether the suggestion of new plants was helpful, the navigation of the interface, and whether the user could see themselves using this app in their everyday life. All of this is useful information early in the design process, before moving on to actually building the system.

Testing Plantplacer

We have tested our prototype with two separate users. Both participants were women aged between 23 and 26 and based in Copenhagen.

The user test was carried out by mirroring the prototype from the design tool Figma to the participant's smartphone via Figma Mirror. Following a media go-along approach, we familiarized the participants with the scenario and guided them through the prototype. The benefit of the media go-along method is that you can interview the user and observe how they interact with the product at the same time (Møller & Robards, 2019). We used the media go-along to explore the users' experience with the simulated AI suggestion feature, the camera feature for taking pictures of their apartment and placing plants, and the overall ease of use. These insights started a new iteration of ideas and things to improve.

The participants were generally excited about the app and the functionality. Both participants proposed a price tag and a direct link to where they could buy the plants the app suggested.

One of the participants wondered whether the app was actually useful. She was not convinced that she would use an app like Plantplacer; she would probably just read what the plant's description label said, or simply try out different places in the home.

The second participant would use the app, as she had just moved into a new place where mapping the optimal spots for certain kinds of plants would be appreciated. She did stumble over the navigation with the sliders and proposed that small arrows could indicate that you can slide between the recommended plants for the scanned spot. Regarding further development, she would love a more systematic approach to the app, such as adding rooms and assigning plants to specific rooms. Furthermore, she suggested a 'watering planner' feature incorporated within the app to make it more essential in the everyday life of living with plants.

Live testing of the first edition prototype.

What to do next?

Our results from the user testing show that an earlier test could have given us feedback on the overall concept. One of the participants failed to see how we were unique compared to the information on the pots; this feedback could have pushed us to define our product in more detail, think of other ways to stand out, and make the concept clearer in the wireframes. In our further process we would have to incorporate more context and make sure that the nuances of the features were clear. Another result shows that users would like a 'where to buy' feature, an aspect we had not taken into account and a good, convenient one to add. Overall, the participants also had feedback about the layout, which needed more guidance on how to interact with and navigate the app. This will surely be incorporated in the next prototype.

AI challenges
Image classification has evolved rapidly in the past few years, and the improvements have led to these systems being used in high-value applications. But there are still many challenges in image classification. When training a system you have a large dataset of training pictures, but real-world pictures can differ considerably from them (Dai & Lin, 2018). For instance, the point of view can be different, the object can be partially hidden behind other objects, the background can be cluttered, or the lighting can illuminate the object differently, all of which can lead to significant inaccuracy in classification (Dai & Lin, 2018).
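One common mitigation for the lighting problem, sketched below under our own assumptions (the helper names and jitter range are hypothetical, not from the cited article): augment the training photos with brightness variations so the classifier sees each plant under many lighting conditions.

```python
# Illustrative data-augmentation sketch: generate brightness-jittered
# variants of a training photo so the model becomes robust to lighting.
import random

def jitter_brightness(pixels, factor):
    """Scale every channel by `factor`, clamped to the 0-255 range."""
    return [tuple(min(255, max(0, round(c * factor))) for c in px)
            for px in pixels]

def augment(pixels, n=4, seed=0):
    """Produce n brightness-jittered variants of one training photo."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [jitter_brightness(pixels, rng.uniform(0.6, 1.4))
            for _ in range(n)]
```

Analogous transforms (rotations, crops, partial occlusion) target the viewpoint and occlusion problems mentioned above in the same spirit.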

Recommendation engines are widely used in online shopping and streaming services, where they try to predict what the user might want based on their choices. This brings its own challenges, such as the cold start, where a new user enters the system and there is no data to predict their preferences; changes in data or the lack of a large dataset; shifting user preferences; and latency, where new products are not yet rated (Khusro et al., 2016). Another technical challenge is the light meter incorporated in smartphones. As we tested some of the existing plant apps, we found that the room was often deemed too dark and that 'no plant' was found. All of these limitations in existing AI technologies leave room for errors and inaccuracy, which makes an AI-driven product challenging to design; it is important to think through these challenges and work around the limitations to ensure a good user experience.
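The cold start can be softened with a content-based fallback: with no user history at all, the app can still rank plants purely by how well their light preferences match the scanned spot. The catalogue values below are hypothetical and for illustration only.

```python
# Sketch of a content-based cold-start fallback, with made-up data:
# rank plants by the distance between their preferred light level and
# the luma measured at the scanned spot; no user history is required.

PLANTS = {  # hypothetical catalogue: species -> preferred luma (0-255)
    "snake plant": 60,
    "pothos": 110,
    "fiddle-leaf fig": 180,
    "cactus": 220,
}

def recommend(measured_luma, k=2):
    """Return the k species whose light preference best fits the spot."""
    ranked = sorted(PLANTS, key=lambda s: abs(PLANTS[s] - measured_luma))
    return ranked[:k]
```

Once the user has logged some plants, a collaborative component can take over from this fallback and personalize the ranking.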

The theories we chose helped us define the characteristics of the AI-driven experience we wanted and incorporate that experience into an early prototype. They made it possible for us to map out our idea and the wireframes, and to define the interface to fit our vision for the app. They also showed us that creating a seemingly high-fidelity prototype this early in the process can create hurdles when testing and evaluating the functions. Beyond that, the theories created a foundation for further development and exposed the parts of our concept that may be inadequate. However, by defining Plantplacer as personal, discretionary, and highly independent, we also ruled out opportunities for the app to be more social, non-discretionary, or low-independent, which could be considered in further research. Further user testing might suggest that the app needs more social aspects or should be more non-discretionary.

In this process we learned that an AI-driven experience can be difficult to prototype. Because of the complexity of AI and machine learning systems, there is a wide variety of responses and interactions to account for when simulating the experience, and this is challenging to prototype. We found the recommendation engine especially hard to prototype so that it felt personal rather than like a generic pop-up, and in further work we would use more of the Wizard of Oz method on an earlier version of the prototype to convey the different responses and strengthen the personal experience. As designers we have learned that an experience can be more than the immediate interaction with an artefact or a technology. The AI-driven experience rests on the system inside the artefact, which is supposed to learn from the interaction and give back a large variety of responses, making it a complex experience to simulate.

Link to video: https://youtu.be/UO9AEUFURCs

By Annika Kondo, Louise Scholes Lyck, Mads Christensen, Mia Bekker Andersen & Mikkel Rantzau (group 11) - IT University of Copenhagen, UX Design I.

References

Chu, H. (2021). 7 apps to keep your plants alive and well. Mashable. Retrieved 27 April 2021, from https://mashable.com/article/best-apps-for-taking-care-of-plants/?europe=true.

Dai, J., & Lin, S. (2018). Image Recognition: Current Challenges and Emerging Opportunities. Microsoft Research Lab Asia. Retrieved 2 May 2021, from https://www.microsoft.com/en-us/research/lab/microsoft-research-asia/articles/image-recognition-current-challenges-and-emerging-opportunities/

Kliman-Silver, C., Siy, O., Awadalla, K., Lentz, A., Convertino, G., & Churchill, E. (2020). Adapting user experience research methods for AI-driven experiences. Conference on Human Factors in Computing Systems — Proceedings, 1–8.

Maulsby, D., Greenberg, S., & Mander, R. (1993). Prototyping an intelligent agent through Wizard of Oz. Conference on Human Factors in Computing Systems — Proceedings, 277–284.

Møller, K., & Robards, B. (2019). Walking through, going along, and scrolling back: Ephemeral mobilities in digital ethnography. Nordicom Review, 40 (Special Issue 1): 95–109. doi:10.2478/nor-2019-0016.

Pernice, K. (2016). UX Prototypes: Low Fidelity vs. High Fidelity. Nielsen Norman Group. https://www.nngroup.com/articles/ux-prototype-hi-lo-fidelity.

Khusro, S., Ali, Z., & Ullah, I. (2016). Recommender Systems: Issues, Challenges, and Research Opportunities. Information Science and Applications (ICISA) 2016. Springer Science+Business Media Singapore.

Yang, Q., Zimmerman, J., Steinfeld, A., & Tomasic, A. (2016). Planning adaptive mobile experiences when wireframing. DIS 2016.
