Extended Intelligence II

The first session of the Extended Intelligence II workshop began with a presentation from Chris and Pietro on how AI and LLM models are impacting design processes today. They then showed us some of their recently developed projects, such as the DOTTOD gallery of cybernetic interpretations. I found it interesting how they used an AI model to generate images from descriptions of iconic pieces of furniture, and then fabricated the results and displayed them both in a virtual gallery and on-site. It was amazing to see how the AI model could act as an inspirational tool and propose suggestive, speculative results.

Then, we experimented with Dottod, an AI camera tool that takes snapshots and re-generates them from a prompt. The results were in most cases imprecise and depended mainly on the complexity and precision of the prompt provided. Working with an AI model requires knowing how best to communicate with it to receive the intended output.

Original snapshot
Generated image

imagine a future where graffiti are multi-dimensional portal to different reality in a utopian reality and in particular use the character of the image as element integrated in the portal

Something I realized is how AI models address content with bias, which shows the importance of datasets. We had a discussion with Chris and Pietro about it, and afterwards I realized that creating small local datasets can be a starting point for building a non-Western-centric future, in terms of reducing bias in the models.

Original snapshot
Generated image

imagine the person poking out the door as flying out of it as if he's being thrown out like trash and the two people on either side of the door appear as walls instead of normal people.

In the second session, we explored the principles of neural networks and diffusion models, achieving a structural understanding of how large language models process data. We examined input, hidden, and output layers using a housing price prediction model as a practical example.
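The input/hidden/output structure we examined can be sketched in a few lines of Python. This is a minimal, illustrative forward pass with made-up features and weights (the housing features, weight values, and network size here are my assumptions, not the exact example from the session):

```python
# Minimal sketch of a feed-forward network for housing price prediction:
# 3 input features -> 2 hidden units (ReLU) -> 1 output (price estimate).
# All weights and features below are invented for illustration.

def relu(x):
    return max(0.0, x)

def predict_price(features, w_hidden, b_hidden, w_out, b_out):
    # Hidden layer: each unit is a weighted sum of the inputs passed through ReLU.
    hidden = [relu(sum(w * x for w, x in zip(ws, features)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    # Output layer: a weighted sum of the hidden activations gives the estimate.
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Hypothetical features: [size_m2, n_rooms, distance_to_center_km].
w_hidden = [[0.5, 10.0, -2.0],
            [0.1, 5.0, -1.0]]
b_hidden = [1.0, 0.5]
w_out = [2.0, 3.0]
b_out = 20.0

price = predict_price([80.0, 3.0, 5.0], w_hidden, b_hidden, w_out, b_out)
```

Training would then adjust the weights to reduce the error between predictions and real prices; here they are fixed just to show how data flows through the layers.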

After this theoretical part of the lesson, we started the hands-on session and worked with Modmatrix, a tool for project prototyping inspired by synthesizer modulation matrices. By inputting an image of my previous Micro_Challenge artifact, I experimented with ways to show how it could iterate within the community of practice I want to be involved in.

Input image
Output image

The image shows a wooden structure, likely a small model or prototype of a piece of furniture or display unit. It is made from light-colored wood and features an angular, geometric design with various slats and compartments. In the background, there's a whiteboard with sticky notes and what appears to be a smartphone resting on the board. There are also some papers and what seems to be a laptop in the scene.

In the final session of Extended Intelligence II with Mohit, Carlos, and Auxence, we worked on a project idea focused on replicating swarm behavior, but between two machines. Each device was supposed to be an ESP32 equipped with a speaker and a microphone. The first device would emit a sound at a certain pitch, while the other would produce a different pitch.

The intention was to synchronize both devices so that, over a defined number of rounds, they would adjust and eventually play the same pitch. To achieve the synchronization, we aimed to use an OpenAI API model that would help the devices gradually align with each other.
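The gradual alignment we were aiming for can be sketched as a simple simulation. Here the adjustment rule is a fixed fraction of the pitch gap per round, which is my stand-in for the correction the OpenAI-driven version was meant to produce; the pitch values and step size are hypothetical:

```python
def synchronize(pitch_a, pitch_b, step=0.25, tolerance=1.0, max_rounds=100):
    """Move two pitches (Hz) toward each other until they nearly match.

    Each round, both devices shift a fraction (`step`) of the current gap
    toward the midpoint -- a simple stand-in for the adjustment that, in
    our project idea, an OpenAI API call would suggest.
    """
    history = [(pitch_a, pitch_b)]
    for _ in range(max_rounds):
        if abs(pitch_a - pitch_b) <= tolerance:
            break  # close enough: the devices play (almost) the same tone
        gap = pitch_b - pitch_a
        pitch_a += step * gap  # device A bends its tone toward device B
        pitch_b -= step * gap  # device B bends its tone toward device A
        history.append((pitch_a, pitch_b))
    return history

# Example: one device starts at 440 Hz, the other an octave higher at 880 Hz.
rounds = synchronize(440.0, 880.0)
final_a, final_b = rounds[-1]
```

With `step=0.25`, the gap halves every round, so the two tones converge within the tolerance after a handful of iterations; on the real hardware each round would correspond to one listen-and-adjust cycle.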

During the development of the project there was more than one challenge. The first step was to test each component individually, like the speakers and microphones, and connect them to the ESP32 Barduino. We managed to test all the components, but we didn't manage to combine the microphone and the speaker in the same device, because the challenge was to create a device that could hear and speak at the same time. Considering the delivery deadline, we had to adjust the idea and try to synchronize the devices through different components, like LEDs. In the end, as the last part of the seminar, we managed to communicate with the OpenAI API and create a connection between the Barduino and the API.

The seminar has been interesting for me and I'm willing to continue developing the same idea and make it work properly. In other words, I discovered that electronics and AI have a lot of potential for future projects but, at the same time, require practice!

Presentation Extended Intelligence II
