Paul's Human-Robot Interaction Blog





Wednesday, January 15, 2014 — 3:44 P.M.

This post has three exciting parts. The first is about athletic robots, the second is about learning to walk, and the third is about falling in love and futuristic technology. Enjoy!


Today's Wordless News is about robot soccer! It references a story from NPR involving an interview of Professor Peter Stone about humanoid robots playing soccer. The links are definitely worth checking out!


My friend Dustin shared a video with me yesterday. The video is called "Flexible Muscle-Based Locomotion for Bipedal Creatures." It's a project that simulated how two-legged creatures walk. Some of them look human, some look like animals or imaginary creatures. What's really interesting to me is to see how some of them use tails for balance. The paper is quite interesting. There's a lot of math explaining the algorithms that were optimized during the iterations shown in the video. You can also check out the thread on reddit, which has many interesting comments.


This past weekend, I saw the movie Her. It's about a man who falls in love with his artificially intelligent operating system. I recommend the movie; it definitely gave me a lot to think about. It's interesting to wonder whether a machine could pass the Turing test and be indistinguishable from human interaction. If that is possible, would falling in love with it be a good idea?

The movie also made me think about the future. And when I think about the future of technology, I can't help but think about The Last Question, by Isaac Asimov. It's a short story that I highly recommend.



Thursday, December 5, 2013 — 9:16 P.M.

Today, I had the awesome opportunity to see some autonomous robots driving around the quad at Marquette University. The Embedded Systems Design class built and programmed them. An Android app is used to send commands to the robot. The students were able to use buttons and the tilt feature to control the robot directly, or click on a map in order to tell the robot where to drive. The robots used GPS and compass data from a cell phone to drive to the waypoints. The robots were controlled by VEXpro ARM microcontrollers.
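Just to make the waypoint-driving idea concrete for myself, here's how I might sketch the core navigation math in Python. This is purely my own guess at the approach — not the class's actual code, which runs on the VEXpro hardware — and the coordinates below are made-up example values. The idea: compute the compass bearing from the current GPS fix to the waypoint, then compare it to the phone's compass heading to get a steering correction.

    import math

    def bearing_to_waypoint(lat, lon, target_lat, target_lon):
        """Approximate compass bearing (degrees clockwise from north) to a waypoint."""
        d_lon = math.radians(target_lon - lon)
        lat1, lat2 = math.radians(lat), math.radians(target_lat)
        x = math.sin(d_lon) * math.cos(lat2)
        y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon)
        return (math.degrees(math.atan2(x, y)) + 360) % 360

    def steering_correction(compass_heading, desired_bearing):
        """Signed turn angle in degrees: positive means turn right, negative means turn left."""
        return (desired_bearing - compass_heading + 540) % 360 - 180

    # Made-up example: the robot is heading due east (90 degrees) and the waypoint is a
    # little to the north-east, so the correction comes out negative (turn left).
    bearing = bearing_to_waypoint(43.0389, -87.9289, 43.0392, -87.9285)
    print(round(steering_correction(90.0, bearing), 1))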

Here are pictures of two of the robots. Notice the techniques used to protect the phone that serves as the robot's "brain" from magnetic interference from the motors.

This is the kind of project that makes it awesome to be an engineer!

@MarquetteU retweeted me about this project!


I'm proud to be featured in a "Student Success" story on Marquette's website about my recent presentation at a robotics conference in England.


Since I like to share links on this blog, here's a link to the Pennsylvania State University Robotics Club RoboWiki. It looks to have all kinds of interesting pages that can inspire or guide people who are interested in robotics.



Saturday, October 5, 2013 — 8:00 P.M.

I recently read about a National Science Foundation project called "Socially Assistive Robotics." It reminded me of the paper I co-authored entitled "Computational Awareness in a Tactile-Responsive Humanoid Robot Comedian" that will be presented at the IEEE SMC 2013 conference in about a week and a half. It's great to see technology that can respond to people, rather than simply accomplishing specific tasks.

Here's a link to a really cool article that I read: MIT news: Surprisingly simple scheme for self-assembling robots. The video is definitely worth watching. The little robots jump around; it almost looks like they are playing together! Wouldn't it be cool to have little robots like these to entertain you? I'm imagining kids' toys that can interact with each other. Also, a bunch of robots like these could work together to build something. Imagine that!



Summer post #1 — Sunday, July 14, 2013 — 6:01 P.M.

The semester is over, but I still learn things about robots that I feel would make for interesting blog posts.

Robots are constantly crawling around the web. Web crawlers follow links and are often used to update search engines like Google. Although not mandatory, it is conventional for websites to have a robots.txt file in the root of their hierarchy. This file tells web robots/crawlers what information they are allowed to access. The humorous example that inspired me to bring up the significance of a robots.txt file may be found here.
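To make that concrete, here's what a tiny robots.txt might look like, plus a few lines of Python using the standard library's urllib.robotparser module to check whether a crawler is allowed to fetch a page. The site and crawler name are just placeholders I made up:

    # A minimal robots.txt might look like this:
    #   User-agent: *
    #   Disallow: /private/

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.set_url("https://example.com/robots.txt")   # placeholder site
    rp.read()

    # A well-behaved crawler checks before fetching each page.
    print(rp.can_fetch("MyCrawlerBot", "https://example.com/private/secrets.html"))  # False if disallowed
    print(rp.can_fetch("MyCrawlerBot", "https://example.com/index.html"))            # True if allowed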

A friend recently shared a link with me that talks about a Kickstarter project called RAPIRO. It's a robot that's designed to be powered by a Raspberry Pi computer. It has 12 servo motors and is designed to be easy to assemble. The designers plan on releasing the 3D data to enable others to customize their RAPIRO robots. This is awesome: 3D printing is becoming cheaper, and small robots like this could be a fun way to teach programming, engineering, and robotics.

Another cool system that brings Raspberry Pi into robotics is BrickPi, which pairs Raspberry Pi with Lego Mindstorms. As a big fan of both Lego and Raspberry Pi, I think this sounds awesome. Lego allows for so much creativity in the physical structure and motion, and Raspberry Pi gives the brain all the power of Linux.

Lastly, I wanted to share a video I watched recently. This YouTube video shows Nao robots that play soccer together. It's really entertaining to see them all on the field, but one of my favorite parts is how they track each other. Starting at about 1:00 in the video, the video is shown side-by-side with a computer simulation of how they track their teammates and opponents. It's also great to watch the goalie block a shot (around 2:00).



Week 13 — Tuesday, April 16, 2013 — 11:26 A.M.

I can't believe it's been four weeks since my last blog post. Time flies when you're a senior in college.

First, I want to share a link: TED talk: Keller Rinaudo: A mini robot -- powered by your phone. It's a 5-minute-and-50-second video about a small robot that has a personality, an iPhone "brain," and can function just like "Skype on wheels" or a telepresence robot. It's also relatively inexpensive for a robot.

Now for what I really want to blog about: some of my own thoughts and ideas about robotics and AI.

I was thinking one day how cool it would be to give robots projectors. Sometimes when I'm trying to express an idea, especially like a description of a person or something else physical, I wish I could just project my thoughts. It would be really cool to be able to communicate via mental images. Likewise, robots who are teachers or friends could use projectors to show people pictures or videos. They could even take pictures or video during human-robot interaction. Can you imagine a robot that says, "You should have seen the look on your face!" and then projects a picture for you to see? I think it would be awesome.

The second idea I wanted to share came about when I was in my Compiler Construction class. We are writing a software compiler that translates from Java, a programming language, into machine language, which is made up of 0s and 1s. It's fairly close to translating human "speech" to computer "do." In a way, compilers are a really close link between our brains and computers/AI/robots. If a robot has a powerful "human speech compiler" in its brain, it can parse the grammar of human speech and understand it. Making this connection in my head was powerful... I like when my classes have connections to each other. They challenge me to think in new directions. And isn't thinking in new ways the goal of college?
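To illustrate what I mean, here's a toy "speech compiler" in Python. It only handles a made-up two-word grammar (VERB OBJECT), and the action strings are invented placeholders rather than a real robot API, but it goes through the same lex/parse/generate steps a real compiler does:

    # Toy "speech compiler": translate a tiny VERB OBJECT grammar into robot instructions.
    # The action strings are made-up placeholders, not a real robot API.
    ACTIONS = {
        ("raise", "arm"): "motors.arm.up()",
        ("turn", "head"): "motors.head.rotate()",
    }

    def compile_command(sentence):
        tokens = sentence.lower().split()      # lexing: break the speech into tokens
        if len(tokens) != 2:
            raise SyntaxError("expected VERB OBJECT, got: %r" % sentence)
        verb, obj = tokens                     # parsing: match the grammar rule
        if (verb, obj) not in ACTIONS:
            raise SyntaxError("unknown command: %r" % sentence)
        return ACTIONS[(verb, obj)]            # code generation: emit a robot instruction

    print(compile_command("Raise arm"))        # -> motors.arm.up()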



Week 9 — Friday, March 22, 2013 — 3:22 P.M.

I read an interesting article yesterday. It was entitled "Will robots create jobs or end them?" The premise was essentially whether robots will leave many humans jobless. Robots like Baxter (that I talked about in my week 6 blog entry) are becoming better at doing work that was once restricted to humans. My opinion (and this is strictly my own) is that human creativity will find more jobs. After all, someone is needed to train the robots and do maintenance.

At one point, humans had to do taxes by hand. Nowadays, many people use software that takes most of the difficult work away. This gives them more time to do other, more interesting things. I'm not saying that nobody finds taxes fun, but my main point is that technology by definition helps people accomplish tasks. Tax software and robots can do things that humans find tedious or boring. Humans can still do these tasks or help the robots do them, but this frees people up to get creative and find new ways to employ their bodies and minds. Why keep doing the same jobs that humans have always done, when there are easier or more efficient ways to get them done?

At the end of spring break last week, I visited UW Madison. During the visit, I checked out their human-robot interaction lab. I volunteered for a demonstration involving a Mitsubishi Wakamaru robot. The big yellow robot talked to me and used hand gestures. Check out the picture below! The Wikipedia entry on this type of robot says that it runs Linux, just like the Nao robots, though the students in the lab said it was a bit older. I still prefer the Nao, though this robot was comfortable to interact with, because it was at about eye level when I was sitting down.



Spring Break — Tuesday, March 12, 2013 — 11:02 P.M.

How did you get into artificial intelligence?
Seemed logical -- I didn't have any real intelligence.

So, it's spring break. I've been relaxing. Still, I can't help but squeeze in a little learning and a little robotics. I read an article on machine learning. It was definitely more of a business-perspective article, and less technical than I had expected (I mean, hoped). Still, it was at least somewhat relevant to human-robot interaction. After all, machine learning is more efficient than coding every single piece of information into a system. Anyway, I'm not sure I would recommend the article for our class to read. However, I did learn something that I wanted to share. The article talked about precision and recall. Essentially, these terms describe the accuracy or effectiveness of machine learning. Precision is the fraction of retrieved instances that are relevant, whereas recall is the fraction of relevant instances that were actually retrieved. Machine learning algorithms make deductions, and precision and recall are ways of measuring how well those deductions match the expected output.
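Here's a tiny worked example in Python, just to convince myself I understand the two terms (the document names are made up):

    # Toy retrieval example: what the algorithm returned vs. what it should have returned.
    retrieved = {"doc1", "doc2", "doc3", "doc4"}
    relevant  = {"doc2", "doc4", "doc5"}

    true_positives = retrieved & relevant              # {"doc2", "doc4"}

    precision = len(true_positives) / len(retrieved)   # 2/4 = 0.50: half of what we retrieved was relevant
    recall    = len(true_positives) / len(relevant)    # 2/3 ~= 0.67: we found two of the three relevant docs

    print("precision = %.2f, recall = %.2f" % (precision, recall))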

The other day I watched Robot and Frank, the movie we watched a preview of in class one day. I thought it was a pretty good movie. One of my favorite quotes from the robot was, "I have a holographic array memory. If you delete half of my memory, I will still have all of my memories, only in lower resolution." It turns out that holographic data storage is actually a real thing. Here's a link to a paper about holographic memory. After skimming both links, I have a general idea that this is new technology that is still being explored, but I'm not quite sure I understand how it works. What I do know reminded me of when we talked about the human brain and memory during a discussion in class. I feel like a "holographic array memory" would lend itself to improvements in artificial intelligence. For example, maybe it would involve structuring memory in a different way that improves sorting and searching algorithms. Whatever the case may be, I like how perceptions about robots are changing (and this is evident through movies, etc.). I also like how our culture is thinking more about how this stuff works. Robots aren't so much magic as technology that is structured in different ways than people. Robots may be anthropomorphized in many ways, but there are still ways in which they are unique from other forms of life or technology.

Other links of interest

Kipper the dog — The Robot — Funny video that I watched with my 5-year-old brother. It's technically not human-robot interaction, but animal-robot interaction. Anthropomorphized animals, that is.

Quadrocopter Pole Acrobatics — Really cool two-minute YouTube video where quadcopters (helicopters with four rotors) play catch. The narrator talks about machine learning.

Raspberry Pi-powered robot (video) — This is a longer video (about 8.5 minutes) about a robot that is controlled by a Raspberry Pi computer. You can control this robot online!


Week 7 — Thursday, February 28, 2013 — 4:17 P.M.

It's hard for me to believe it's already week 7. Tomorrow is the first day of March! This week in my blog, I want to get back to talking about robotics/class, and not talk so much about robots and AI in entertainment and the media. However, I do want to include the link to the video I showed in class. Here's the link to watch Nao robots dance. It's a nice video, though it could use more sound/music, and I feel the demonstrator (who, incidentally, is the director of Aldebaran Robotics) should have given an introduction to what he was presenting, but I digress.

This week, we talked about imitation and "Programming by Demonstration." I've been thinking about what this would entail. Dustin and I had talked about how the Kinect or other forms of motion capture can be used to record physical human motion that can then be played back using computer animation.

I wonder if we can simplify the system in a couple of ways. It might then be possible to accomplish Programming by Demonstration with the Nao, or at least have her imitate our motions.

Here are my ideas:

  1. Have the Nao start by facing a blank wall
  2. Have the user make obvious motions
  3. Have the Nao recognize shapes of the body (like a T shape when the user's arms are pointing out)
  4. Program the Nao to perform a pre-programmed, recognized shape/action

Anyway, this is just an idea. I just wanted to start thinking about it. I know there is a risk inherent with posting my ideas online, but I figure maybe I can inspire someone else or gather feedback.
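As a rough sketch of steps 3 and 4 above, here's how the shape-recognition part might look in Python. The angle-based classifier and the behavior names are all hypothetical placeholders of my own, not anything from the actual Nao SDK:

    def classify_shape(left_arm_angle, right_arm_angle):
        """Label an obvious body shape from how far the user's arms are raised (degrees from the side)."""
        if 80 <= left_arm_angle <= 100 and 80 <= right_arm_angle <= 100:
            return "T_SHAPE"      # both arms straight out to the sides
        if left_arm_angle >= 160 and right_arm_angle >= 160:
            return "Y_SHAPE"      # both arms raised overhead
        return "UNKNOWN"

    # Map each recognized shape to a pre-programmed behavior (names are made up).
    PREPROGRAMMED_ACTIONS = {
        "T_SHAPE": "nao_spread_arms",
        "Y_SHAPE": "nao_raise_arms",
    }

    def imitate(left_arm_angle, right_arm_angle):
        shape = classify_shape(left_arm_angle, right_arm_angle)
        return PREPROGRAMMED_ACTIONS.get(shape, "nao_stand_still")

    print(imitate(90, 95))    # -> nao_spread_arms
    print(imitate(170, 165))  # -> nao_raise_arms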


Week 6 — Thursday, February 21, 2013 — 1:01 P.M.

You may have noticed that I skipped a week in my blog. I kept putting it off last week, but at the same time, I was adding things to the list of "cool/interesting robotics-related-things" that I wanted to either discuss here or maybe bring up in class.

In the past two weeks, I was busy! I finally watched Bicentennial Man, which several people in class have referenced during discussion. I also watched Ghost in the Shell, an anime science fiction film; I can't recall where I heard about it. I'm pretty sure this was the first anime film I've ever watched, and it was great because it related to artificial intelligence and humanoid robots.

What made Ghost in the Shell particularly interesting to me was that the characters were a mix of human and robotic parts. One of the main characters asked another, "how much human are you?" Most of them had cybernetic enhancements to their bodies, including their brains. The "ghost" that the title refers to was essentially a sentient computer program in search of a body.

During lunch today, I watched the first episode of Ultraman, a TV series that Dr. Williams had mentioned in class. I enjoyed it, but want to watch a few more episodes before I relate it back to human-robot interactions. By the way, you can watch the complete series here (on Hulu.com).

Somehow, I balanced my robot entertainment with schoolwork, which included programming a Nao robot for the Open House! You can learn more at our project website.

There are a couple more things I wanted to mention in my blog. I read an article from the Chicago Tribune about Baxter, a robotic co-worker who is about the size of a human and can be taught repetitive tasks in an industrial setting. Baxter has an LCD face screen capable of many expressions. I find this so exciting. It's another step in human-robot interaction, allowing humans to train robots to perform mundane activities. Some might argue that robots like this can take human jobs away, but I would argue that this enables humans to open up a new door of jobs to pursue. More information about Baxter is available on his website. The print copy of the article I read also included this graphic about Baxter's capabilities and cost.

Finally, I wanted to post a link to the Aldebaran website's Python development documentation page. I've been learning about programming the Nao in Python, and I must say that I'm excited. I like Python, and feel that as awesome and fun as the Choregraphe software is, using Python will give us more power and flexibility for programming the Nao robot.
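As a taste of what that looks like, here's a minimal sketch using the NAOqi Python API, based on my reading of the docs so far. The IP address is a placeholder for our Nao on the lab network, and 9559 is the default NAOqi port:

    from naoqi import ALProxy   # Aldebaran's Python SDK

    NAO_IP = "192.168.1.10"     # placeholder address for our Nao
    PORT = 9559                 # default NAOqi port

    # Proxies to two of the Nao's modules: text-to-speech and motion.
    tts = ALProxy("ALTextToSpeech", NAO_IP, PORT)
    motion = ALProxy("ALMotion", NAO_IP, PORT)

    tts.say("Hello, Marquette!")
    motion.wakeUp()                 # stiffen the joints and stand up
    motion.moveTo(0.2, 0.0, 0.0)    # walk 0.2 meters straight ahead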


Week 4 — Thursday, February 7, 2013 — 2:48 P.M.

This week, outside of class, I saw a preview for a movie called The Prototype. The movie is about a humanoid drone that escapes from a lab and is hunted by the FBI/military. Something that may or may not be obvious from the trailer is that the "software" of the robot is actually a human who somehow downloaded himself into the robot. This relates to our class, where we were talking about the Wizard of Oz (WoZ) methodology. I would call this "Wizard unified with Oz" because the robot and human are joined together in the same device/system. The human will experience exactly the same things that the robot does, because the two are, in fact, one.

If a human could download themselves into a robot, we could learn a lot from it. If a human's consciousness could be put into the hard drive/computer that powers the robot's brain, it would essentially mean that there is some way to describe human intelligence through machine code or computer language.

The movie The Prototype is set to come out later this year. You can watch the trailer here (YouTube). There is an interesting discussion of the trailer/movie concept here, on reddit. It's from a subreddit (sub-community on reddit.com) called /r/Futurology, which is also worth checking out. Interesting links and ideas relating to the future and technology are posted and discussed.

I also wanted to share a link to an article about the ULTra PRT (Personal Rapid Transit) pods/robot taxis that are now fully operational at London's Heathrow airport. They are small vehicles that may be hailed by travelers at any time. They are fully autonomous, and present two major advantages. For one, they benefit the users who no longer have to wait for a human-controlled shuttle to arrive. They also save energy, according to the article, as they are fully electric. The full article may be found here.

This concept leads me to envision a robot-filled future similar to the Star Wars universe. Rather than simply make robots that are exactly like humans, it would behoove us to develop a variety of different types of robots that fulfill different purposes. Perhaps a sentient, humanoid robot would be valuable as an assistant in the home or office setting, but a transportation robot could be more like a drone that has preprogrammed destinations and doesn't necessarily need to engage users in conversation. Of course, "taxi cab conversations" can be useful/interesting. Who knows where robotics will take us?


Week 3 — Thursday, January 31, 2013 — 12:44 P.M.

Yesterday, my group led the panel discussion at the beginning of class. I started by talking a little bit about The Last Question, a short story by Isaac Asimov. I'm not going to ruin the story here, but I will say that artificial intelligence plays a major role. The story may be found here. I found the story via this reddit post, which has some interesting discussion in the comments section.

During the discussion, my group and I talked about animation principles applied to 3D animation, as well as "the intentional strategy." The class definitely preferred the animation article, but I feel there is a lot to be gained from both. It's interesting to me how animation, which usually makes me think of a screen with moving pictures, can relate to robots. Their motion is, by definition, animated. They are "given life" by human creators, and so any and all of their motion was at some point designed.

The intentional strategy was certainly a philosophical reading that may be difficult to get through. Out of this article, I found a deeper level of thinking about intention versus design, and free will versus a predetermined course of events. Up to this point, robots follow their programming; any variation is due to bugs. I don't feel that robots are yet capable of using instinct or intent. However, I wonder if the use of neural networks could bring them closer to having actual intentions. For example, suppose a robot is given a set of possible actions or responses, along with rules for when each is appropriate, and suppose it can change these based on positive reinforcement or voice commands from a user. It could then use its neural network to determine possible responses to unspecified situations. Then I suppose the robot would have something closer to "intention" or decision-making.
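Just to make the reinforcement idea concrete to myself, here's a toy Python sketch. It isn't a neural network — just the simplest possible learned mapping from situation to response, where positive feedback from a user nudges the robot's preferences. All of the situations and actions are made up:

    import random

    # Weighted preferences: which responses the robot favors in each situation.
    preferences = {
        "greeting":   {"wave": 1.0, "say_hello": 1.0, "stay_still": 1.0},
        "loud_noise": {"cover_ears": 1.0, "ask_whats_wrong": 1.0, "stay_still": 1.0},
    }

    def choose_response(situation):
        """Pick a response with probability proportional to its learned weight."""
        actions, weights = zip(*preferences[situation].items())
        return random.choices(actions, weights=weights)[0]

    def reinforce(situation, response, reward=0.5):
        """Positive feedback ("good robot!") makes this response more likely next time."""
        preferences[situation][response] += reward

    response = choose_response("greeting")
    reinforce("greeting", response)   # the user smiled, so reward that choice
    print(response, preferences["greeting"])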

Here's a link to the Disney animated short film that was shown in class: Paperman. At the end, I noticed the name "John Lasseter" in the credits. He had written the article on 2D/3D animation that we discussed in class!




Week 2 — Wednesday, January 23, 2013 — 11:27 P.M.

Well, the second week was an easy week. We had off on Monday for Martin Luther King, Jr. Day.
Today was an 18-minute class, so we didn't discuss much. However, I really enjoyed the movie I watched, Short Circuit. Honestly, I expected the movie to be boring and outdated, but I laughed out loud during several of the scenes. The robot was cute, kind of like Wall-E, and was REALLY smart!

After class, Dustin and I talked about our Midterm Project. We will have to discuss it as a group, but we came up with some good ideas. I'm not sure exactly where we'll go. I personally hope to be using the Nao robot, but I'm sure that whatever we do, we will both enjoy it and get the project done.

There was a lot of reading for this Wednesday, but I personally am glad for it. I learned a bit about animation, which may help me in Computer Graphics. And I learned about balancing my schedule. This week, I didn't do so well (I was up until three in the morning on Tuesday night working on the readings and homework for this class). But I know that reading these articles will only benefit me, and I will try to do better next week.
Something that I am curious about is how Dr. Williams comes up with all these articles. Does he read many robotics research journals? Or does he follow something like reddit.com/r/robotics?




Week 1 — Tuesday, January 15, 2013 — 1:30 P.M.

The first day of class was thought-provoking. We viewed a segment of I, Robot and discussed it. Someone brought up how Sonny seemed to display emotions/feelings. I think these emotions were evident in what he said, as well as in his actions (hitting the table).

I realized later that Sonny's eyes also changed when he was angry. This anthropomorphized him even further. He is not just a robot, but a very human-like robot.

An idea that I have been thinking about is the difference between programming a robot with exactly what to do, and having the robot learn how to express itself. I believe it would be possible to design a robot to "act out" the scene we watched. The program would require the robot to change its appearance in order to look and sound angry. It would require complex programming, but it would be possible. What is more impressive is how Sonny had the facilities to show emotion (eyes, movable face, etc.), but had to learn to express feelings himself. He wasn't just programmed; he also decided how to act.

I'm not sure how the ability to express one's self would be programmed. My point is that there is a difference between telling the robot to call the getAngry() function and allowing the robot to use it as he/she sees fit.

The question is, how would we write code that gives robots the ability to freely express themselves? We humans are capable of a great range of expression. We program ourselves to use our facial expressions to show emotion, and have so much choice in how we behave and what we say. How do we impart this ability to machines, which essentially just do whatever we tell them to do?
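Here's a toy sketch in Python of the difference I'm describing. Everything in it is hypothetical — including the getAngry() idea from above, renamed get_angry() to fit Python style. The point is the contrast between the programmer calling the function directly and the robot choosing it from its own internal state:

    def get_angry():
        print("eyes turn red, voice gets louder")

    def smile():
        print("eyes brighten, voice softens")

    # 1) Scripted: the programmer decides exactly when the robot gets angry.
    get_angry()

    # 2) Self-directed: the robot consults its own internal state and chooses how to express itself.
    def express_self(internal_state):
        if internal_state["frustration"] > 0.7:
            get_angry()
        elif internal_state["happiness"] > 0.5:
            smile()
        else:
            print("neutral expression")

    express_self({"frustration": 0.9, "happiness": 0.2})   # here, the robot itself "decides" to get angry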







