...

  • Augmented Reality – there's a light form of this, which we've already seen with art projects like YellowArrow. A heavier form depends on wearable computing and intensive graphics rendering, which has been piloted but isn't yet mature in 2005. (Bryan Alexander)
  • Haptics and other multi-modal technologies – gesture recognition, especially. (Diana Oblinger) I think that gaze tracking, which is technologically simple and inexpensive, will play an important role. We have working demos at IBM Research. (Jean Paul)
  • Next-generation presence-awareness – your technology knows what you are doing and where you are, and delivers information to you based on that; e.g., my phone is not ringing because it is linked to my calendar and knows I am in a meeting – but if my spouse were to call, that call would come through (a minimal rule sketch follows after this list). (Diana Oblinger)
  • Seamless connection of student-owned technology – transparent handoffs, authentication. Non-computer devices begin to dominate as content access points. (Alan Levine)
  • Next-generation folksonomic tools – while the commercial tools (see above) are ready for use, there are important features (e.g., reputation systems, coupling to search engines) that they have not yet touched, and these are essential to solving potential problems (e.g., folksonomic spam) and creating new academic uses (e.g., "living" knowledge repositories). (Ruben Puentedura)
  • Techniques to display complex documents on displays the size of a (large) stamp – there are 1.5 billion cell phones worldwide, but only 400 million PCs. These phones could be the opportunity to gain access to the functionality of a networked computer and to participate in the digital world, by using software artifacts that permit one to display and interact with even complex documents. (Jean Paul Jacob)
  • Gaze tracking – gaze information plays an important role in identifying a person's focus of attention, and can provide useful communication cues to a multimodal interface. For example, it can be used to identify where a person is looking and what he or she is paying attention to. On a computer screen, this can be used to understand what is of interest to the user. When looking at the first 10 hits of a search that returned a million, the user's eyes will spend more time on the pages (hits) of interest, helping the system filter the remaining hits toward those that share keywords with the hits the user dwelled on (see the re-ranking sketch after this list). (Jean Paul)
  • Personal-Social Information Management Tools – there are plenty of personal information management tools around, but I haven't found one that really cuts it yet. These tools need to be able to switch between outline and visual representations, and need to connect an individual's information/knowledge with their communities of interest and practice, and with whatever else is out there in the world. (Nick Noakes)
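
A minimal sketch of the presence-awareness rule described above, in Python. The calendar entries, priority-caller list, and function names here are illustrative assumptions, not any real phone platform's API:

```python
from datetime import datetime

# Hypothetical presence-aware call filtering: the device checks the owner's
# calendar before deciding whether an incoming call should ring.

MEETINGS = [  # (start, end) blocks assumed to come from a calendar feed
    (datetime(2005, 10, 12, 9, 0), datetime(2005, 10, 12, 10, 30)),
]
PRIORITY_CALLERS = {"spouse", "daycare"}  # callers who always get through

def in_meeting(now):
    """True if the calendar shows a meeting at this moment."""
    return any(start <= now <= end for start, end in MEETINGS)

def should_ring(caller, now):
    """Ring only if the owner is free, or the caller is on the priority list."""
    if caller in PRIORITY_CALLERS:
        return True
    return not in_meeting(now)

print(should_ring("colleague", datetime(2005, 10, 12, 9, 15)))  # False: in a meeting
print(should_ring("spouse", datetime(2005, 10, 12, 9, 15)))     # True: priority caller
```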

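A rough sketch of the gaze-based filtering idea above, also in Python. The dwell times, keyword sets, and function names are illustrative assumptions, not IBM's implementation:

```python
from collections import Counter

# Hypothetical re-ranking: gaze dwell time on the first hits reveals which
# keywords interest the user; remaining hits are re-scored by those keywords.

def keyword_weights(first_hits, dwell_seconds):
    """Weight each keyword by the total gaze time spent on hits containing it."""
    weights = Counter()
    for hit, seconds in zip(first_hits, dwell_seconds):
        for word in hit["keywords"]:
            weights[word] += seconds
    return weights

def rerank(remaining_hits, weights):
    """Order the remaining hits by the summed weight of their keywords."""
    return sorted(remaining_hits,
                  key=lambda hit: sum(weights[w] for w in hit["keywords"]),
                  reverse=True)

first_page = [{"title": "Gaze tracking at IBM", "keywords": {"gaze", "attention"}},
              {"title": "Cell phone markets", "keywords": {"phones", "markets"}}]
dwell = [12.0, 0.5]  # the user lingered on the first hit, skipped the second
rest = [{"title": "Attention-aware interfaces", "keywords": {"attention", "interfaces"}},
        {"title": "Handset sales 2005", "keywords": {"phones", "sales"}}]
print([h["title"] for h in rerank(rest, keyword_weights(first_page, dwell))])
```
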
...

  • Voice Lecture/Seminar Translation & Indexing – podcasting is currently the rage and we have the tools to take audio and stream it from various sites. What's missing is the ability to do voice-to-text as easily AND to have semantically meaningful indices created that point you to where in the audio stream a given concept or important text segment is spoken (a minimal indexing sketch follows below). But it won't be far off. See: Spoken Lecture Translation (Phil Long)

    • add your thoughts here
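
A minimal sketch of the indexing half of the idea above, in Python. It assumes transcript segments already produced by some voice-to-text step; the segment format, values, and function names are illustrative:

```python
from collections import defaultdict

# Hypothetical concept index over a transcribed lecture: map each term to the
# audio offsets (seconds) where it is spoken, so a query jumps straight there.

transcript = [  # (seconds_from_start, text) - assumed output of a transcription step
    (0.0, "Today we introduce recursion and base cases"),
    (95.0, "A base case stops the recursion from running forever"),
    (210.0, "Now an example computing factorial recursively"),
]

def build_index(segments):
    """Map each lowercase term to the offsets of segments containing it."""
    index = defaultdict(list)
    for offset, text in segments:
        for term in set(text.lower().split()):
            index[term].append(offset)
    return index

def lookup(index, term):
    """Return the points (in seconds) to seek to for a given concept."""
    return sorted(index.get(term.lower(), []))

idx = build_index(transcript)
print(lookup(idx, "recursion"))  # [0.0, 95.0] - jump points in the lecture audio
```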

...