Design

The Tulip to be London’s 2nd Tallest Building


The City of London’s planning and transportation committee has just approved The Tulip. The 305-meter observation tower is to be constructed at 20 Bury Street, adjacent to 30 St Mary Axe, informally known as the Gherkin. The Tulip is designed by Foster + Partners, a British international architecture and integrated design studio known for many high-profile glass-and-steel buildings. The tower will be a visitor attraction without any office space and will be London’s second tallest building. Work could start as early as 2020, with a scheduled completion date of 2025.

 

A survey of Londoners completed in December 2018 suggested that two-thirds believed the tower would be “an attractive addition to the London skyline,” but the tower has been opposed by Historic England, Historic Royal Palaces, and the Greater London Authority. Following the approval, Duncan Wilson, chief executive of Historic England, said he was disappointed with the decision and that the construction “would damage the very thing its developers claim they will deliver – tourism and views of London’s extraordinary heritage.” London airport officials also opposed the decision, claiming that the tower would obstruct air traffic control.

 

source: wikipedia

Rays of Light – Shaping Objects

 

UC Berkeley researchers have discovered a new light-based 3D printing technique that has the potential to transform how products from prosthetics to eyeglass lenses are designed and manufactured. Using light, the 3D printer transforms liquids into complex solid objects in only a matter of minutes. Nicknamed the “replicator” by the inventors — after the Star Trek device that can materialize any object on demand — the 3D printer can create objects that are smoother, more flexible and more complex than what is possible with traditional 3D printers. It can also encase an already existing object with new materials — for instance, adding a handle to a metal screwdriver shaft — which current printers struggle to do.

 

Most 3D printers, including other light-based techniques, build up 3D objects layer by layer. This leads to a “stair-step” effect along the edges. They also have difficulty creating flexible objects, because bendable materials can deform during the printing process, and supports are required to print objects of certain shapes, like arches. The new printer relies on a viscous liquid that reacts to form a solid when exposed to a certain threshold of light. Projecting carefully crafted patterns of light — essentially “movies” — onto a rotating cylinder of liquid solidifies the desired shape “all at once.”

 

The new printer was inspired by the computed tomography (CT) scans that help doctors locate tumors and fractures within the body. CT scans project X-rays or other types of electromagnetic radiation into the body from many different angles. Analyzing the patterns of transmitted energy reveals the geometry of the object.

 

Besides patterning the light, which requires complex calculations to get the exact shapes and intensities right, the other major challenge faced by the researchers was how to formulate a material that stays liquid when exposed to a little bit of light, but reacts to form a solid when exposed to a lot of light.

 

The 3D printing resin is composed of liquid polymers mixed with photosensitive molecules and dissolved oxygen. Light activates the photosensitive compound, which depletes the oxygen. Only in those 3D regions where all the oxygen has been used up do the polymers form the “cross-links” that transform the resin from a liquid to a solid. The objects also don’t have to be transparent. The researchers printed objects that appear opaque using a dye that transmits light at the curing wavelength but absorbs most other wavelengths.
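
To make the principle concrete, here is a minimal sketch of the tomographic idea, assuming NumPy and scikit-image; the published method refines the projected patterns iteratively, and the shape and threshold below are purely illustrative.

```python
import numpy as np
from skimage.transform import radon, iradon

# Desired 2D cross-section of the object (1 = solid, 0 = empty).
target = np.zeros((128, 128))
target[40:90, 50:80] = 1.0

# One projected light pattern per rotation angle: the frames of the "movie".
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
frames = np.clip(radon(target, theta=angles), 0.0, None)  # light is non-negative

# As the vial rotates, exposures from all angles accumulate in the resin;
# unfiltered back-projection approximates this cumulative light dose.
dose = iradon(frames, theta=angles, filter_name=None)

# Oxygen inhibition sets a dose threshold: below it the resin stays liquid,
# and only where the threshold is exceeded do the polymers cross-link.
solid = dose > np.percentile(dose, 90)  # illustrative threshold
```

Because curing is all-or-nothing above the threshold, the whole shape forms at once rather than layer by layer, which is what eliminates the stair-step effect.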

 

This work was supported by UC Berkeley faculty startup funds and by Laboratory-Directed Research and Development funds from Lawrence Livermore National Laboratory. The team has filed a patent application on the technique.

 

source: berkeley

Dynamic City Streets of the Future

 

Imagine a city street, nestled between buildings with mostly foot and bicycle traffic. During the morning and evening hours, there might be a steady stream of commuters heading to work. In the middle of the day and the evening, families might use the street as a play space. And on the weekend, the street could be cleared for a block party or a basketball game.

 

Alphabet’s Sidewalk Labs in Toronto, together with the international design and innovation office CRA-Carlo Ratti Associati, has designed The Dynamic Street, a prototype of a modular and reconfigurable paving system that hints at the possibility of a streetscape that seamlessly adapts to people’s needs. The project is based on Sidewalk Labs’ extensive experience and research into street design, and gives visitors the chance to engage with emerging technology concepts.

 

The Dynamic Street features a series of hexagonal modular pavers, which can be picked up and replaced within hours or even minutes in order to swiftly change the function of the road without creating disruptions on the street. The system is inspired by French research group IFSTTAR’s pilot project on removable urban pavement underway in Nantes.

 

The project explores the different patterns that can be created on the hexagonal grid, as well as the integration of lights into individual pavers. Each paver can also potentially host a plug-and-play element – that is, vertical structures such as poles, bollards or even basketball hoops.
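
As a purely illustrative sketch of the data model such a reconfigurable grid suggests, the snippet below addresses each paver with axial hexagonal coordinates; every name, function and value here is invented for this example and is not part of the actual project.

```python
from dataclasses import dataclass

@dataclass
class Paver:
    function: str = "pedestrian"  # e.g. "bike", "vehicle", "play"
    light_on: bool = False
    plugin: str | None = None     # e.g. "pole", "bollard", "basketball_hoop"

# Axial coordinates (q, r) address each hexagonal paver in the street.
street = {(q, r): Paver() for q in range(-5, 6) for r in range(-5, 6)}

def reconfigure(cells, function, plugin=None):
    """Swap the role of a set of pavers, e.g. to clear space for a block party."""
    for qr in cells:
        street[qr].function = function
        street[qr].plugin = plugin
        street[qr].light_on = function != "vehicle"  # light up non-traffic zones

# Weekend: turn a patch of roadway into a play area with a basketball hoop.
reconfigure([(0, 0), (1, 0), (0, 1)], "play", plugin="basketball_hoop")
```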

 

“The Dynamic Street creates a space for urban experimentation: with this project, we aim to create a streetscape that responds to citizens’ ever-changing needs,” says Carlo Ratti, founder of the CRA practice and director of the Senseable City Lab at the Massachusetts Institute of Technology (MIT). “As autonomous vehicles are likely to start running on streets soon, we can start to imagine a more adaptable road infrastructure.”

 

source: carloratti

Student-Developed Moon Base

 

Angelus Chrysovalantis Alfatzis, an architectural engineering student at the National Technical University of Athens, Greece, has gained attention for his idea for a moon base. He is one of several young researchers based at the European Space Agency’s (ESA) astronaut center in Cologne, Germany, investigating Moon-related concepts as Europe prepares for future missions.

 

Alfatzis describes his architectural approach as “hyperlocal.” He is drawn to extreme environments in remote places and believes that sourcing or producing materials on the Moon itself will be vital to building a sustainable lunar habitat – a view that ESA shares. “I always strive to find material and structural solutions in accordance with the resources available on-site,” he explains. “At the moment, my focus is on using unprocessed lunar soil for construction and the architectural applications of this. Our idea is to transport inflatable modules to the base of a small crater in the south polar region of the Moon, and then gradually fill the cavity with lunar soil until the modules are effectively buried. Meters of shielding will protect those inside from radiation. Building inside a crater will also help insulate the habitat, thanks to the stable temperature of the Moon’s underground environment, and provides cover from the threat of micrometeoroids.”
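
To put “meters of shielding” into rough perspective, here is a back-of-envelope sketch; the bulk density of loose lunar soil is commonly cited at around 1,500 kg/m³, and the figures are illustrative only.

```python
density = 1500.0  # kg/m^3, assumed bulk density of loose lunar regolith
for thickness in (1.0, 2.0, 3.0):  # meters of soil piled over the modules
    areal = density * thickness    # shielding mass per square meter of roof
    print(f"{thickness:.0f} m of regolith ≈ {areal:,.0f} kg/m² ({areal / 10:.0f} g/cm²)")
```

For comparison, Earth’s atmosphere provides roughly 1,000 g/cm² of overhead shielding at sea level, which is why several meters of soil are needed to approach comparable protection.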

 

As with construction on Earth, Alfatzis says, the main purpose of lunar buildings will be to protect inhabitants from external conditions that could otherwise pose harm, and to create a habitat that supports human life. But there are special considerations that must be factored into planning. Sunlight, changes in temperature, the type of terrain and the level of gravity all play a role in designing a suitable concept. Due to the lack of a protective atmosphere or magnetic field, any lunar base must also protect its inhabitants from radiation and the tiny meteorites that rain down overhead.

source: esa

A Walking Car

 

Hyundai wowed the 2019 Consumer Electronics Show last week in Las Vegas with its new vehicle, “Elevate.” Like something out of Star Wars, wheels mounted on robotic legs allow users to drive, walk or even climb over the most treacherous terrain.

 

In times of disaster, the first 72 hours are of utmost importance. Search-and-rescue missions are often impeded by the damage the disaster leaves behind – which led Hyundai to develop the first-ever vehicle with movable legs.

 

The design is capable of both mammalian and reptilian walking gaits, allowing it to move in any direction. The legs also fold up into a stowed drive mode, where power to the joints is cut and an integrated passive suspension system maximizes battery efficiency. This allows Elevate to drive at highway speeds just like any other vehicle. But no other vehicle can climb a five-foot wall, step over a five-foot gap, walk over diverse terrain, and achieve a 15-foot-wide track width, all while keeping its body and passengers completely level.
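
As a toy illustration of the mode switching described above – not Hyundai’s actual control logic; every name and threshold below is invented – the drive/walk decision can be pictured as a small state machine:

```python
from enum import Enum, auto

class Mode(Enum):
    DRIVE = auto()           # legs stowed, joint power cut, passive suspension
    WALK_MAMMALIAN = auto()  # knees-forward gait for stepping over obstacles
    WALK_REPTILIAN = auto()  # splayed, wide-track gait for maximum stability

def select_mode(terrain_roughness: float, highway_speed: bool) -> Mode:
    """Toy policy: stow the legs on smooth roads, walk only on rough terrain."""
    if highway_speed or terrain_roughness < 0.2:
        return Mode.DRIVE
    return Mode.WALK_MAMMALIAN if terrain_roughness < 0.7 else Mode.WALK_REPTILIAN
```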

 

Design manager David Byron offered a real-life scenario: “Imagine a car stranded in a snow ditch just 10 feet off the highway being able to walk or climb over the treacherous terrain back to the road, potentially saving its injured passengers – this is the future of vehicular mobility.”

 

source: hyundainews

Neo Mint: 2020

 

Renowned trend forecaster WGSN, headquartered in London, has declared its prediction for 2020’s color of the year: Neo Mint. Unlike Pantone, which announces its color for the upcoming year, WGSN looks further into the future.

 

The reasoning behind the choice is that Neo Mint is a mint tone that combines technology, science, and nature. Additionally, the color feels futuristic and is equally suitable for men and women.

 

In fashion, designers such as Jil Sander and Dries van Noten showed a similar color for the fall/winter 2018/19 season. Neo Mint is a perfect sparring partner for other pastel shades, especially for spring, but the color also works well with all metallics.

 

For interiors, it was seen in the staging of the Hermès and Mini shows at the Salone del Mobile and in Gucci’s “Gucci Clinic.”

 

sources: wgsn, textilwirtschaft

AI Identifies Objects at the Speed of Light

 

A team of UCLA electrical and computer engineers has created a physical artificial neural network — a device modeled on how the human brain works — that can analyze large volumes of data and identify objects at the actual speed of light. The device was created using a 3D printer at the UCLA Samueli School of Engineering.

 

Numerous devices in everyday life today use computerized cameras to identify objects — think of automated teller machines that can “read” handwritten dollar amounts when you deposit a check, or internet search engines that can quickly match photos to other similar images in their databases. But those systems rely on a piece of equipment to image the object, first by “seeing” it with a camera or optical sensor, then processing what it sees into data, and finally using computing programs to figure out what it is.

 

The UCLA-developed device gets a head start. Called a “diffractive deep neural network,” it uses the light bouncing from the object itself to identify that object in as little time as it would take for a computer to simply “see” the object. The UCLA device does not need advanced computing programs to process an image of the object and decide what the object is after its optical sensors pick it up. And no energy is consumed to run the device because it only uses diffraction of light.

 

New technologies based on the device could be used to speed up data-intensive tasks that involve sorting and identifying objects. For example, a driverless car using the technology could react instantaneously — even faster than it does using current technology — to a stop sign. With a device based on the UCLA system, the car would “read” the sign as soon as the light from the sign hits it, as opposed to having to “wait” for the car’s camera to image the object and then use its computers to figure out what the object is.

 

Technology based on the invention could also be used in microscopic imaging and medicine, for example, to sort through millions of cells for signs of disease. “This work opens up fundamentally new opportunities to use an artificial intelligence-based passive device to instantaneously analyze data, images and classify objects,” said Aydogan Ozcan, the study’s principal investigator and the UCLA Chancellor’s Professor of Electrical and Computer Engineering. “This optical artificial neural network device is intuitively modeled on how the brain processes information. It could be scaled up to enable new camera designs and unique optical components that work passively in medical technologies, robotics, security or any application where image and video data are essential.”

 

The process of creating the artificial neural network began with a computer-simulated design. Then, the researchers used a 3D printer to create very thin, 8-centimeter-square polymer wafers. Each wafer has uneven surfaces, which help diffract light coming from the object in different directions. The layers look opaque to the eye, but the submillimeter-wavelength terahertz light used in the experiments can travel through them. And each layer is composed of tens of thousands of artificial neurons — in this case, tiny pixels that the light travels through. Together, a series of pixelated layers functions as an “optical network” that shapes how incoming light from the object travels through them. The network identifies an object because the light coming from the object is mostly diffracted toward a single pixel that is assigned to that type of object.
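
To get a feel for the optics, here is a minimal simulation sketch of one such pixelated layer, assuming NumPy and the standard angular-spectrum method of wave propagation; the dimensions, layer count and random phases below are illustrative stand-ins, not the study’s actual parameters.

```python
import numpy as np

def propagate(field, dist, wavelength, pixel):
    """Free-space propagation of a complex optical field (angular spectrum)."""
    fx = np.fft.fftfreq(field.shape[0], d=pixel)
    fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
    arg = 1.0 / wavelength**2 - fx2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    keep = arg > 0  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dist) * keep)

def diffractive_layer(field, phase, dist, wavelength, pixel):
    """Each 'neuron' is a pixel that delays the incoming wave by a fixed phase."""
    return propagate(field * np.exp(1j * phase), dist, wavelength, pixel)

# Toy numbers: a 0.75 mm terahertz wave, 0.4 mm pixels, layers 3 cm apart.
wavelength, pixel, dist = 0.75e-3, 0.4e-3, 3e-2
field = np.ones((200, 200), dtype=complex)  # plane wave arriving from the object
for phase in [np.random.uniform(0, 2 * np.pi, (200, 200)) for _ in range(5)]:
    field = diffractive_layer(field, phase, dist, wavelength, pixel)
intensity = np.abs(field) ** 2  # detectors read where the light ends up
```

In the real device the per-pixel phases are not random: they are learned in simulation, as described below, and then fixed in the printed wafers.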

 

The researchers then trained the network using a computer to identify the objects in front of it by learning the pattern of diffracted light each object produces as the light from that object passes through the device. The “training” used a branch of artificial intelligence called deep learning, in which machines “learn” through repetition and over time as patterns emerge. “This is intuitively like a very complex maze of glass and mirrors,” Ozcan said. “The light enters a diffractive network and bounces around the maze until it exits. The system determines what the object is by where most of the light ends up exiting.”

 

In their experiments, the researchers demonstrated that the device could accurately identify handwritten numbers and items of clothing — both of which are commonly used tests in artificial intelligence studies. To do that, they placed images in front of a terahertz light source and let the device “see” those images through optical diffraction. They also trained the device to act as a lens that projects the image of an object placed in front of the optical network to the other side of it — much like how a typical camera lens works, but using artificial intelligence instead of physics.

 

source: UCLA

Sleeping with Masterpieces

 

Founded in 1824 and housing over 2,300 works of art dating from the mid-13th century to the 1900s, The National Gallery showcases paintings by the world’s greatest masters, including Botticelli, Leonardo da Vinci, Rembrandt, Monet and Van Gogh.

 

In an exclusive collaboration, The National Gallery and Savoir Beds have joined forces to deliver an inspiring approach that takes art in interiors to another level. Once commissioned, the design is specially printed in the UK on a selection of three fabrics – lustrous velvet, textured linen viscose and versatile cotton. Finished with a bespoke plaque detailing the portrait and artist, every commission will be personally approved by The National Gallery to guarantee the design preserves the essence and integrity of one of the greatest art institutes in the world.

 

sources: nationalgallery, savoirbeds, wikipedia

3D-Printed Concrete Dome for Mars

 

NASA has named Northwestern University and Skidmore, Owings & Merrill LLP (SOM) as one of the five leading teams in the latest phase of the 3D-Printed Habitat Centennial Challenge competition. Launched in 2014, the competition invited 18 research teams from around the world to design autonomously constructed habitats that have the potential to support human life on Mars and the Moon.

 

The Northwestern University/SOM concept—led by Northwestern, with SOM providing support on engineering and design—includes a 3D-printed concrete dome designed to be manufactured and printed on Mars. The printed material acts as a shell when completed, protecting inhabitants from solar radiation, wind-borne debris, and meteorite impacts. The material is placed over an inflatable formwork system that doubles as a pressure vessel simulating Earth’s atmosphere. The structure also incorporates modular connection points with entry airlocks suited for colony expansion.

 

Northwestern University will continue to lead the next phase of the competition, which will focus on 3D prints of the foundation and wall elements for the dome structure. Deadlines and milestones will be announced by NASA in the coming months.

 

sources: northwestern, som

Cell-Sized Robots

 

Researchers at MIT have created what may be the smallest robots yet that can sense their environment, store data, and even carry out computational tasks. These devices, which are about the size of a human egg cell, consist of tiny electronic circuits made of two-dimensional materials, piggybacking on minuscule particles called colloids.

 

Colloids, which are insoluble particles or molecules anywhere from a billionth to a millionth of a meter across, are so small they can stay suspended indefinitely in a liquid or even in air. By coupling these tiny objects to complex circuitry, the researchers hope to lay the groundwork for devices that could be dispersed to carry out diagnostic journeys through anything from the human digestive system to oil and gas pipelines, or perhaps to waft through the air to measure compounds inside a chemical processor or refinery.

 

The tiny robots made by the MIT team are self-powered, requiring no external power source or even internal batteries. A simple photodiode provides the trickle of electricity that the robots’ circuits require to power their computation and memory. That’s enough to let them sense information about their environment, store those data in memory, and then have the data read out after they accomplish their mission.
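
As an entirely illustrative toy model of that sense–store–read-out cycle (the names and the power threshold below are invented, not MIT’s design):

```python
memory: list[float] = []  # the robot's on-board data store

def on_sample(photodiode_current_nA: float, sensor_reading: float) -> None:
    """Record a reading only when the photodiode harvests enough power."""
    if photodiode_current_nA > 1.0:  # assumed power-on threshold
        memory.append(sensor_reading)

def read_out() -> list[float]:
    """After the mission, the stored record is read back out."""
    return memory

# Simulated journey: readings are logged only where enough light reaches the device.
for light, reading in [(0.5, 7.1), (2.0, 7.4), (3.1, 6.9)]:
    on_sample(light, reading)
print(read_out())  # -> [7.4, 6.9]
```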

 

Such devices could ultimately be a boon for the oil and gas industry. Currently, the main way of checking for leaks or other issues in pipelines is to have a crew physically drive along the pipe and inspect it with expensive instruments. In principle, the new devices could be inserted into one end of a pipeline, carried along with the flow, and then removed at the other end, providing a record of the conditions they encountered along the way, including the presence of contaminants that could indicate the location of problem areas. Similarly, such particles could potentially be used for diagnostic purposes in the body, for example to pass through the digestive tract searching for signs of inflammation or other disease indicators.

 

source: MIT