TrashScan is a new approach to sorting waste. It helps users sort trash into the correct bin: compost, bottles & cans, mixed paper, or landfill. Simply place your trash on top of the platform and TrashScan tells you which bin it belongs in. This technology not only makes trash easier to sort, it also educates users on the intricacies of recycling and composting, helping communities work toward zero waste by 2020. I've included only the highlights of the project here; a more comprehensive write-up can be found on Hackster.

TrashScan has been featured as a Project Spotlight by the Jacobs Institute for Design Innovation, and our team was honored to present our prototype to the Jacobs Institute Advisory Board.


Problem

Students do not know which bin their trash belongs in, and many do not care. The resulting cross-contamination keeps whole bins from being composted.

Role

  • Led design efforts from user research and conceptualization to functional prototype
  • Conducted ethnographic interviews and regularly met with interviewees
  • Created the animations in Processing
  • Sketched concepts and developed the user's interaction with the platform

Initial Research

    We interviewed Monica and Nicole, two student coordinators from Cal Recycling. In our first interview, Monica described the inefficiencies of sorting trash during waste audits, which trace back to human error: cross-contamination between bins. With 40,000 students, it is difficult to get everyone to correct their habits and take the time to learn which bin each item belongs in.

    In a follow-up interview, we learned the most common errors (one way these rules could be encoded in the device is sketched below):

  • soiled or wet paper placed in mixed paper, which takes dry paper only
  • plastics placed in compost, though they belong in landfill or bottles & cans
  • non-compostable spoons from off campus placed in compost
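
    A minimal sketch of how these rules might be encoded, assuming the object recognizer emits category labels; the labels and mapping below are hypothetical illustrations, not our actual implementation:

```python
# Hypothetical mapping from recognizer labels to bins, based on the
# common errors surfaced in the interviews.
BIN_FOR_CATEGORY = {
    "dry_paper": "mixed paper",
    "soiled_paper": "compost",       # wet/soiled paper contaminates mixed paper
    "plastic_wrapper": "landfill",
    "plastic_bottle": "bottles & cans",
    "plastic_spoon": "landfill",     # off-campus spoons are not compostable
    "food_scraps": "compost",
}

def bin_for(category: str) -> str:
    # Unknown items default to landfill so they cannot contaminate
    # the compost or recycling streams.
    return BIN_FOR_CATEGORY.get(category, "landfill")
```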

Design Process

    Our first design was a box with lights positioned on top. A camera would look into the box, and once the item was identified, a light would indicate the correct bin. This initial design was too constricting and bulky, so we simplified it to a flat platform: the user would place their trash on the platform, and a light attached to its right side would light up, much like in the initial design. In our third design, we wanted to do more with the device and display some sort of imagery on the platform itself, so we added a projector next to the camera and removed the lights. The device would project an image, customized per bin, onto the flat platform.

    For our final design, we used a monitor, which conveniently served both as the platform and as the source of the visuals and sound. We placed this monitor inside a box. Users first see a start screen with simple instructions on how to use TrashScan.


    Because signage is often hidden and ineffective at telling people what belongs in each bin, we wanted animations and sounds that engage the user and associate each bin with a graphic and a sound. We included facts from the interviews as part of the messaging. For compost, we played a victorious sound to encourage composting; for landfill, an error "wah-wah" noise to discourage its use.


    Final Prototype

    We made the animations with Processing.py, then rendered them to video so they could play on the Raspberry Pi. After an object is identified and sorted into its corresponding bin, the monitor plays that bin's animation, and the paired sound helps the user remember where the object belongs. Fun facts on screen build education into the user's interaction with the platform.
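
    A minimal sketch of the feedback step, assuming one pre-rendered video per bin (with sound baked in) and omxplayer, a video player commonly used on the Raspberry Pi; the file names and the `play_feedback` helper are hypothetical:

```python
import subprocess

# Hypothetical pre-rendered clips, one per bin, exported from Processing.py.
VIDEO_FOR_BIN = {
    "compost": "animations/compost.mp4",        # victorious sound
    "bottles & cans": "animations/bottles.mp4",
    "mixed paper": "animations/paper.mp4",
    "landfill": "animations/landfill.mp4",      # sad sound
}

def play_feedback(bin_name: str) -> None:
    # Block until the clip finishes so back-to-back sorts don't overlap.
    subprocess.run(["omxplayer", VIDEO_FOR_BIN[bin_name]], check=False)
```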

    Greeting Screen

    Compost Sort

    Bottles & Cans Sort

    Mixed Paper Sort

    Landfill Sort


    Final Thoughts

    Challenges

  • The object detection was sensitive to the lighting of the room, so it had to be readjusted whenever we presented in a different setting (one possible mitigation is sketched after this list).
  • The Raspberry Pi could not play the Processing graphics directly from our Python code, but we overcame this limitation by rendering the animations as video.
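
    One way the lighting sensitivity could be reduced is to normalize each camera frame toward a reference brightness before detection. A rough sketch, assuming OpenCV; the reference value and function are our illustration, not part of the actual prototype:

```python
import cv2
import numpy as np

REFERENCE_BRIGHTNESS = 128.0  # hypothetical target mean intensity

def normalize_brightness(frame: np.ndarray) -> np.ndarray:
    # Scale the frame so its mean intensity matches the reference,
    # keeping detection thresholds roughly stable across rooms.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mean = max(float(gray.mean()), 1.0)
    scaled = frame.astype(np.float32) * (REFERENCE_BRIGHTNESS / mean)
    return np.clip(scaled, 0, 255).astype(np.uint8)
```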
    Future Iterations

  • A companion mobile application
  • The ability to detect multiple objects at once (e.g., a tray holding a plate and food)
  • Handling edge cases (e.g., plastic vs. compostable spoons)

Credit

  • Machine Learning: Jessie Salas
  • Ethnographic Interviews: Rachel Lin
  • Animations & Frontend: Rachel Lin, Chonyi Lama
  • Object Recognition: Lesley Chiang
  • Box Construction: Drake Myers