Describe ITP/IMA
Live: https://describing-ima-itp.herokuapp.com/
Presentation: https://docs.google.com/presentation/d/1GhUT2kSZhzTlgopsoTtAxy-SX85jpwbBiKyxpshcYrY/edit?usp=sharing
Process Sketch: https://editor.p5js.org/bethfileti/sketches/QiKP4n7i7
The Concept
Trying to explain the ITP/IMA program to friends and family is not an easy task. When speaking with some fellow students, we realized that it would be great to have a persistent solution. What if, rather than struggling through definitions, we could just share a website that explains it for us?
Ideally, this would also create an opportunity to show a bit of the ITP/IMA approach, rather than just telling people about it. With this in mind, my original thought was to develop some type of machine-learning functionality built on a database of pre-existing definitions, collected from students, alumni, faculty, and friends of the program. Maybe it could auto-generate a new definition based upon the existing ones? Or maybe it could compare the sentiment of the definitions and sort them from most echoed to outliers? Or maybe it could rewrite the existing definitions but shift the tone to match a chosen sentiment?
Exploration
I spent far too much time struggling to explore the machine-learning angle here without gaining any kind of foothold. Here is some documentation of that unproductive stretch:
Things that tripped me up:
- Python virtual environments
- Downgrading versions of Python and TensorFlow to recreate tutorials from 2018
- When/why to use/learn something new like “Spell”
- When is it worth it to introduce a different workflow? How do you evaluate the cost/time risk of integrating a new tool when you are in a learning/exploration phase and up against a tight delivery deadline?
- Training a neural network was a really good exploration, but I realized that what I was starting with would never work: training a neural network requires clearly defined categorical labels, which I did not have. (This sketch uses a different dataset, because I had a few datasets of varying sizes available for testing.) https://editor.p5js.org/bethfileti/sketches/m8UUloHPQ
Resources for future learning:
- https://www.youtube.com/watch?v=EnblyAdZG8U Convolutional Neural Networks with ml5.js
- http://colah.github.io/posts/2015-08-Understanding-LSTMs/
- http://karpathy.github.io/2015/05/21/rnn-effectiveness/
- https://stackabuse.com/text-generation-with-python-and-tensorflow-keras/
- https://nabilhassein.github.io/about/
Finally, while exploring some of the documentation for NaNoGenMo, I wiggled my way to this CSV of synonyms and antonyms from verachell. This was a great find because it allowed me to explore language using a technical toolset I was comfortable in. I also really liked the concept of exploring an idea by looking at it from its opposite positioning.
Using the CSV, I wrote a script to convert the data into JSON, which was easier to iterate through.
From there, I worked in the p5 editor on a sketch to explore possible ways of interacting with text, antonyms, and synonyms. My strategy was to use a canvas element as the background for the website and display synonyms for words from the various descriptions. I thought that surrounding the descriptions with similar language would help users get a stronger overall sense of the tone of the program. Users could then interact with the synonyms to get an overall sense of what the program isn't.
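The lookup behind that strategy can be sketched as a small helper: given one description and the synonym/antonym JSON, gather the related words to scatter across the canvas. The function name and JSON shape here are assumptions for illustration, not the sketch's actual code:

```javascript
// Hypothetical helper: given a description and the synonym/antonym map,
// collect the synonyms (or antonyms) of every word that appears in the map.
function relatedWords(description, thesaurus, kind = "synonyms") {
  // Lowercase and split into word tokens.
  const words = description.toLowerCase().match(/[a-z']+/g) || [];
  const found = new Set(); // Set de-duplicates repeated alternatives
  for (const w of words) {
    for (const alt of (thesaurus[w]?.[kind] ?? [])) found.add(alt);
  }
  return [...found];
}
```

Calling it with `kind = "antonyms"` gives the opposite-positioning list for the "what the program isn't" interaction.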
To support this from a technical standpoint, I needed the sketch to feel like it was being "erased," or like a flashlight was passing over the top of it, revealing a different image underneath. This was a nice little technical challenge. I spent a fair amount of time trying to achieve the effect with vector masks and blendMode(), but I couldn't get the results I wanted with the flexibility I needed over how the synonym/antonym layers were presented.
I ended up using the sketch to create two p5 images: I draw one image as the background, use get() to pull the image data from around the mouse cursor, draw the other image on top, and then draw the saved selection back around the cursor.
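To make the order of operations explicit, here is the same idea simulated on plain pixel arrays instead of p5 images (so it runs outside the browser; in the real sketch the "save" step is p5's get() and the "paste" step is image()):

```javascript
// The reveal technique, simulated on 2D arrays:
// 1. save a patch of the background layer around the cursor,
// 2. draw the foreground layer over everything,
// 3. paste the saved background patch back around the cursor.
function reveal(background, foreground, mx, my, r) {
  const h = background.length, w = background[0].length;
  // Clamp the patch to the canvas bounds, as get() does at the edges.
  const x0 = Math.max(0, mx - r), x1 = Math.min(w, mx + r);
  const y0 = Math.max(0, my - r), y1 = Math.min(h, my + r);
  // 1. Save the patch from the background layer.
  const patch = [];
  for (let y = y0; y < y1; y++) patch.push(background[y].slice(x0, x1));
  // 2. "Draw" the foreground over the whole canvas.
  const canvas = foreground.map(row => row.slice());
  // 3. Paste the saved background patch back around the cursor.
  for (let y = y0; y < y1; y++) {
    for (let x = x0; x < x1; x++) canvas[y][x] = patch[y - y0][x - x0];
  }
  return canvas;
}
```

Running this every frame with the current mouse position produces the flashlight effect: the foreground everywhere, with a window of background following the cursor.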
Putting this into the context of the site itself resulted in a really slow experience that did not handle responsiveness well. The JSON was being pulled in on the client side, and a number of timing issues were mucking everything up. Beyond the technical issues, I found the UX to be a distraction from reading the definitions: rather than reading, I was playing with the background. I also wanted to make a connection between the synonym and the antonym being revealed. The way things were being drawn, the power of the experience was meant to be in the collection of words as a whole; but when I interacted with it as a user, I found the specific words and points of interaction to be where the engagement lay.
Design & Production
Taking this feedback into account, I redesigned the UX and restructured the code. First, I moved the synonym/antonym loading to the server side and pushed the data through from there, which resolved most of the timing issues. Second, I remembered a new word I had learned from a recent crossword puzzle: coruscation, meaning a glitter or sparkle. This got me thinking about the idea of a rainbow prism; I felt a nice visual correlation between the separation of light into an array of colors and the separation of a word into its synonyms and antonyms. I also looked to integrate the interactivity directly into the definitions themselves: if the purpose of the site is to present and encourage the reading of these definitions, then it was important to maintain the user's focus on this area.
To create the prism effect, I first explored using tooltips to display the alternative language on hover, but I realized that this interfered with the reading experience while also being poor for accessibility. I opted instead to keep the effect tied to hovering, using CSS shadows for the visual treatment.
With all of this resolved, the site came together enough to present as a better proof of concept. One thing that proved problematic was how aggressively I was using event triggers. Specifically, hovering over a specific class and then clicking was running each handler too many times, and some handlers were renaming the class in the process. This is still causing some issues, and I suspect it is behind the majority of the performance bugs.
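One plausible cause of handlers running too many times (an assumption on my part, not a confirmed diagnosis) is wiring up a fresh listener on every hover, so a single click fires the handler once per hover that preceded it. A guard that marks an element as already wired keeps each element to one handler. `EventTarget` stands in for a DOM node here so the idea is runnable outside the browser:

```javascript
// Guarded listener attachment: if the same element gets "wired up" on
// every hover, each click runs the handler once per attachment. Marking
// the element prevents duplicates. (Note: addEventListener only de-dupes
// when the exact same function reference is passed; a fresh closure each
// time slips past that check, which is the failure mode guarded here.)
function attachOnce(target, type, handler) {
  if (target.__wired) return;   // already has its handler, skip
  target.__wired = true;
  target.addEventListener(type, handler);
}
```

An alternative fix is a single delegated listener on a parent element, which also avoids re-attaching anything when classes are renamed.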
Next Steps
- Someone added a description which has since disappeared, so I need to investigate the persistence of the database on Heroku.
- Ensure better communication across accessibility standards (don't rely on color alone to communicate)
- Check user flows of adding descriptions → ant/syn flagging
- Spam/Content monitoring
- Clean up interactions on mobile
- Speed up overall performance (Investigate those pesky event listeners!)
- Put it into action and get some user feedback from target audience