1. notes

    47 minutes ago

    collections that are raw as fuck on aura tout vu f/w 2014-15

    (via subobo)

    Lovely clothes

  2. notes

    16 hours ago

    fromobscuretodemure:

    Meng Huang, Kiki Kang and Liu Li Jie by Yin Chao for Harper’s Bazaar China September 2012.

    (via subobo)

    Lovely clothes

  3. notes

    18 hours ago

    
    People will stare. Make it worth their while → Stéphane Rolland Haute Couture | F/W ‘12-‘13

    (via subobo)

    Lovely clothes

  4. notes

    20 hours ago

    fabledquill:

    bogleech:

    colorsoffauna:

    Silky anteater (Cyclopes didactylus)

    This is actually also why the more popular “Giant Anteater” has “Giant” in its name. This is the “regular” anteater.


    I bet some of you did not even know there was a regular anteater.

    Much less that it was obviously designed retroactively by the angel of Jim Henson.

    designed retroactively by the angel of Jim Henson.

    (Source: , via subobo)

    cute animals

  5. notes

    22 hours ago

    mindblowingscience:

    Ethical trap: robot paralysed by choice of who to save

    Can a robot learn right from wrong? Attempts to imbue robots, self-driving cars and military machines with a sense of ethics reveal just how hard this is

    CAN we teach a robot to be good? Fascinated by the idea, roboticist Alan Winfield of Bristol Robotics Laboratory in the UK built an ethical trap for a robot – and was stunned by the machine’s response.

    In an experiment, Winfield and his colleagues programmed a robot to prevent other automatons – acting as proxies for humans – from falling into a hole. This is a simplified version of Isaac Asimov’s fictional First Law of Robotics – a robot must not allow a human being to come to harm.

    At first, the robot was successful in its task. As a human proxy moved towards the hole, the robot rushed in to push it out of the path of danger. But when the team added a second human proxy rolling toward the hole at the same time, the robot was forced to choose. Sometimes, it managed to save one human while letting the other perish; a few times it even managed to save both. But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole. The work was presented on 2 September at the Towards Autonomous Robotic Systems meeting in Birmingham, UK.
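
    The article doesn’t show Winfield’s controller, but the failure mode it describes is easy to reproduce in a toy model: a greedy rescuer that keeps re-targeting whichever proxy currently looks most at risk, where each change of target costs travel time. A minimal sketch in Python (every name and constant here is illustrative, not from the experiment):

    ```python
    import random

    def run_trial(steps=80, switch_cost=6):
        """One trial of the two-proxy dilemma (toy model only)."""
        pos = [1.0, 1.0]      # each proxy's distance from the hole
        fallen = [False, False]
        lane, travel = 0, 0   # current rescue target; steps left in transit
        for _ in range(steps):
            active = [i for i in (0, 1) if not fallen[i]]
            if not active:
                break
            # Greedy rule: help whichever active proxy is nearest the hole.
            want = min(active, key=lambda i: pos[i])
            if want != lane:
                lane, travel = want, switch_cost   # re-targeting costs time
            if travel > 0:
                travel -= 1                        # in transit: helping nobody
            else:
                pos[lane] += 0.12                  # push the chosen proxy back
            for i in active:                       # proxies drift toward the hole
                pos[i] -= 0.05 + random.uniform(-0.01, 0.01)
                if pos[i] <= 0.0:
                    fallen[i] = True
        return fallen

    results = [run_trial() for _ in range(33)]
    print(sum(all(f) for f in results), "of 33 trials lost both proxies")
    ```

    Each push widens the gap in favour of the helped proxy, so the greedy rule immediately prefers the other one, and the robot can burn its time shuttling back and forth: the “fretting” described above.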

    Winfield describes his robot as an “ethical zombie” that has no choice but to behave as it does. Though it may save others according to a programmed code of conduct, it doesn’t understand the reasoning behind its actions. Winfield admits he once thought it was not possible for a robot to make ethical choices for itself. Today, he says, “my answer is: I have no idea”.

    As robots integrate further into our everyday lives, this question will need to be answered. A self-driving car, for example, may one day have to weigh the safety of its passengers against the risk of harming other motorists or pedestrians. It may be very difficult to program robots with rules for such encounters.

    But robots designed for military combat may offer the beginning of a solution. Ronald Arkin, a computer scientist at Georgia Institute of Technology in Atlanta, has built a set of algorithms for military robots – dubbed an “ethical governor” – which is meant to help them make smart decisions on the battlefield. He has already tested it in simulated combat, showing that drones with such programming can choose not to shoot, or try to minimise casualties during a battle near an area protected from combat according to the rules of war, like a school or hospital.
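
    The governor’s internals aren’t given here, but architecturally it is a veto layer sitting between the planner and the weapon: each proposed action is checked against encoded constraints before it can execute. A rough sketch (hypothetical names and coordinates standing in for rules encoded from the laws of war):

    ```python
    from dataclasses import dataclass
    from math import hypot

    @dataclass
    class Strike:
        x: float
        y: float

    # Protected sites (e.g. a school, a hospital) with no-strike radii --
    # illustrative stand-ins for constraints drawn from the rules of war.
    PROTECTED_ZONES = [((3.0, 4.0), 1.5), ((8.0, 1.0), 2.0)]

    def governor_permits(strike: Strike) -> bool:
        """Veto any strike whose target falls inside a protected zone."""
        return all(hypot(strike.x - cx, strike.y - cy) > radius
                   for (cx, cy), radius in PROTECTED_ZONES)

    proposed = Strike(3.5, 4.2)
    print("permitted" if governor_permits(proposed) else "vetoed by governor")
    ```

    The check runs after target selection but before action, which is what lets such a platform “choose not to shoot” even when its planner proposes a strike.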

    Arkin says that designing military robots to act more ethically may be low-hanging fruit, as these rules are well known. “The laws of war have been thought about for thousands of years and are encoded in treaties.” Unlike human fighters, who can be swayed by emotion and break these rules, automatons would not.

    “When we’re talking about ethics, all of this is largely about robots that are developed to function in pretty prescribed spaces,” says Wendell Wallach, author of Moral Machines: Teaching Robots Right from Wrong. Still, he says, experiments like Winfield’s hold promise in laying the foundations on which more complex ethical behaviour can be built. “If we can get them to function well in environments when we don’t know exactly all the circumstances they’ll encounter, that’s going to open up vast new applications for their use.”

    This article appeared in print under the headline “The robot’s dilemma”

    Watch a video of these ‘ethical’ robots in action here

    (via subobo)

    robots

  6. notes

    1 day ago

    ed-pool:

    "I want my father back, you son of a bitch"

    "And for a moment, he was alive. And my fairy tale came true."

    (via subobo)

    Princess Bride

  7. notes

    1 day ago

    febricant:

    imagehoarder:

    reneeruinseverything:

     YEEEESSSSS OH LORD YESSSS YES YES YES 

    PAGING hellotailor

    (via subobo)

    lovely clothes

  8. notes

    1 day ago

    mythicarticulations:

    Who’s a good boy? You’re a good boy!
    Who devours the flesh of mortals? You devour the flesh of mortals!

    Poseable “Cerberus in a Can” now available in our Etsy shop.

    (via subobo)

    I want one

  9. notes

    1 day ago

    tamorapierce:

    prairie-homo-companion:

    this is from a real diary by a 13-year-old girl in 1870. teenage girls are awesome and they’ve always been that way.

    I hope Bessie’s father didn’t nip her interests in the bud.

    (Source: eudaemaniacal, via subobo)

    this is pretty great

  10. notes

    1 day ago

    Rina Takeda [x]

    (Source: 0ci0, via subobo)

    uh so cute

  11. notes

    2 days ago

  12. notes

    2 days ago

    zealous4fashion:

    On Aura Tout Vu Haute Couture Fall Winter 2014/15 Collection

    (via subobo)

    Lovely clothes

  13. notes

    2 days ago

    monocromas:

    deathrock:

    becausebirds:

    The blackest bird there ever was. It’s black on the outside from head to toe, and black on the inside with its meat and organs.

    It’s called the Ayam Cemani from Indonesia, and they’re $2,500 a pop. Their bones are black, too. The only part of them that’s not black is their blood.

    That’s metal.

    (via subobo)

    amnimals

  14. notes

    2 days ago

    kgschmidt:

    socialpsychopathblr:

    By Salar Kheradpejouh  

    Hello Reference.

    (via subobo)

  15. notes

    2 days ago

    salparadisewasright:

    estufar:

    An actual headline from The New York Times in 1919 

    I love this so much.

    (via subobo)