How do we work in consensual and respectful ways with texts by marginalized authors that are not as well-represented, and by virtue of that fact alone, much more likely to be misrepresented, misappropriated, or misunderstood if we are not careful?
How do we really do ethics with machine learning?
After going through the readings and the reading questions, I felt that these two questions go hand in hand as ways into the issues the articles discuss.
In the interview with Emily Martinez, I was struck by the thoughtfulness and creativity of Martinez's exploration of building chatbots with AI. It was refreshing to see someone approach chatbot development without chasing the monolithic image of a machine that talks. Most mainstream depictions suggest that the only goal of working with AI is either to create the ultimate "intelligent" being, or to have it complete tasks too large in scale or too tedious for humans, usually with an eye toward maximizing profits for whoever harnesses it. That framing has never really made sense to me, since there is no way to measure how "human" anything is, or how intelligent.
Reading about Martinez's approach to understanding things like healing, trauma, and addiction made me realize that the development of this type of AI is really just a reflection of the creator's core philosophies. For example, when Martinez talked about teaching the bots to "'heal' — to be trauma-informed and understand healthy attachment so they can express love and kindness without generating the addictive and repetitive patterns driven by fear and anxiety that are encoded not only at the semantic level, but into the structure of so many narratives themselves", and about decoding context, I thought about how much of life is trying to find a balance between two extremes. In art, context is and is not important. Helping others is important, but so is helping yourself. Maybe one of the capacities AI could use more of is the ability to hold contradictions.
Last week, I was able to complete the charRNN example using an Apple "terms of service" document I found online, but the results were basically "flankzaAAAc '12imscm UI9wjxJlllll", and I wasn't able to reproduce the run for documentation, so I decided to test the p5 Markov example with the same txt file. It produced much more coherent output almost instantly. I'm sure that also means it can only reach a certain level of complexity, but it was cool to see for myself how different methods yield different results, which helps me get a better sense of when to use which tool.
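For reference, the core idea behind the Markov approach is simple enough to sketch in a few lines of plain JavaScript. This is not the actual p5 example I ran, just an illustration of why the output comes together so quickly: the model only records which words follow which in the source text and then walks those links at random. The filename "apple_tos.txt" is a placeholder for my terms-of-service file.

```javascript
// Minimal word-level Markov chain sketch (illustrative, not the p5 example itself).
const fs = require('fs');

// Build a map from each word to the list of words that follow it in the text.
function buildChain(words) {
  const chain = new Map();
  for (let i = 0; i < words.length - 1; i++) {
    const key = words[i];
    if (!chain.has(key)) chain.set(key, []);
    chain.get(key).push(words[i + 1]);
  }
  return chain;
}

// Start from a random word and repeatedly pick a random successor.
function generate(chain, length = 50) {
  const keys = [...chain.keys()];
  let current = keys[Math.floor(Math.random() * keys.length)];
  const output = [current];
  for (let i = 0; i < length; i++) {
    const next = chain.get(current);
    if (!next || next.length === 0) break;
    current = next[Math.floor(Math.random() * next.length)];
    output.push(current);
  }
  return output.join(' ');
}

// "apple_tos.txt" is a placeholder name for the source text file.
const text = fs.readFileSync('apple_tos.txt', 'utf8');
const words = text.split(/\s+/).filter(Boolean);
console.log(generate(buildChain(words), 60));
```

Because the chain only ever looks one word back, it stays locally coherent but can't carry long-range structure, which is exactly the complexity ceiling I noticed.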
Documentation: