UrbanSynth: AI-Powered City Design




UrbanSynth centres on a Public Virtual Forum where people test and shape how their physical city develops. Participants connect through Augmented Reality and FPV flying (first-person view), which gives people the ability to see from a visual perspective other than their actual location. In the forum, a person's expressed idea instantly becomes a virtual element that interacts with what already exists, combining virtual architectural design ideas with the physical world. In the Public Virtual Forum, people test out ideas for transforming the physical city in which they live.

An AI architecture agent deals intelligently with particularly complex tasks, essentially using deep-learning algorithms to solve complicated physical-world problems. To mention a few of the things the AI architecture agent does: it understands how people use spaces with the same or a similar program; it understands the context and behaviour of people in relation to the built environment; and it can integrate and translate people's suggestions into that context, to accurately deliver what people need and desire from their physical surroundings.
Architectural researchers, anthropologists, and many other architecture-related professionals work to improve the AI architecture agent. They research the very particular and specific aspects that deep-learning algorithms are still not able to learn by themselves. The AI architecture agent is based on real-time data.
There are three different types of channels: blue, yellow, and red.
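As a purely illustrative sketch (not part of the original proposal, and with all names hypothetical), the three channel types described below could be modelled as a simple enumeration:

```python
from enum import Enum

class Channel(Enum):
    """Hypothetical model of the three UrbanSynth channel types."""
    BLUE = "shared"      # people and the local AI agent on the same channel
    YELLOW = "selection" # AI generates alternatives; people or the client vote
    RED = "personal"     # personal channels for private buildings

def describe(channel: Channel) -> str:
    """Return a short description of what happens on each channel."""
    descriptions = {
        Channel.BLUE: "shared channel with the local AI architecture agent",
        Channel.YELLOW: "vote between several AI-generated architectural solutions",
        Channel.RED: "personal channel for private buildings and occupied public areas",
    }
    return descriptions[channel]

print(describe(Channel.YELLOW))
```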


Each project has a set timeframe within which the final result must be achieved; it ranges from one hour to several thousand hours.
People vote on the proposals generated by the AI architecture agent.
Places marked in blue are where people are connected to the same channel, along with the AI architecture agent working in that particular location.
Places marked in yellow are where the AI agent generates several different architectural solutions for each requested project. Here, people or the client vote to determine which proposal should be implemented.
Places marked in red are where people are connected to a personal channel. People work mostly with their private buildings and with the competitions for the public areas that they occupy. Everyone has access to each other's personal channels and can vote on proposals regarding public areas. If a high level of consensus is achieved, the proposals are transformed into reality.
It does seem like the virtual design world will move further into haptic space, as manual input devices (mouse, keyboard, etc.) have not changed that radically since the invention of personal computing. This also raises questions about the role of AI in the process visualised here: how would an architect input parameters, or the design brief, and what might the outputs be? Perhaps a future architect would select appropriate designs through 'swiping' processes, a form of digital sculpting, or something else entirely. What kind of architecture might this generate? How would these technologies deal with context, such as localised climatic conditions or cultural information, and what opportunities might this create?
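To make the voting mechanism concrete, here is a minimal sketch of how consensus on a public-area proposal might be checked. The class names and the 0.75 threshold are illustrative assumptions: the text only says "a high level of consensus", without specifying a figure.

```python
from dataclasses import dataclass, field

# Hypothetical consensus threshold; the proposal itself does not fix a number.
CONSENSUS_THRESHOLD = 0.75

@dataclass
class Proposal:
    """A proposal generated by the AI architecture agent for a public area."""
    title: str
    votes: list = field(default_factory=list)  # True = approve, False = reject

    def vote(self, approve: bool) -> None:
        self.votes.append(approve)

    def consensus(self) -> float:
        """Share of approving votes (0.0 if nobody has voted yet)."""
        return sum(self.votes) / len(self.votes) if self.votes else 0.0

    def is_realised(self) -> bool:
        """The proposal is transformed into reality only on high consensus."""
        return self.consensus() >= CONSENSUS_THRESHOLD

# Usage: three approvals and one rejection reach exactly the threshold.
square = Proposal("Reshape the neighbourhood square")
for approve in [True, True, True, False]:
    square.vote(approve)
print(square.consensus(), square.is_realised())  # 0.75 True
```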
An example of a typical Public Virtual Forum, where people can leave comments on the walls while the AI architecture agent presents the problem under discussion.
Avatars in the Public Virtual Forum, where people meet to shape their physical environment.