An Idea for Aggregation of Student Online Artifacts Using Visual Rendering and Metadata Collection

Diagram of Visual Aggregation.

This afternoon, I participated in an online reunion with my colleagues at Georgia Tech (Nirmal Trivedi, Pete Rorabaugh, Andy Frazee, and Clay Fenlason) about the first-year reading program, Project One.

During the conversation, I thought of this idea for aggregating student online work in a database and presenting student work through a website.

This builds on Pete’s ideas about dispersed exploration and fragmented student artifactual creation. If our students are working online using any service, platform, or software, how can we bring their work together so that we can see, and more importantly they can see, how their work fits together with the work of others?

We can build a simple website that collects information (a URL; a brief, optional description; tags; and an affirmation that the linked content belongs to the student and is legal), generates a rendered image of the content, and presents those images as thumbnails alongside the collected information on a visually dynamic website that supports different ways of arranging the aggregated content (by date, by dominant color, by tags, etc.).

Beyond making these aggregated student artifacts available through the presentation website, the archive of rendered images and supporting metadata can be dispersed once the project is over (dispersing the archive is an idea I received from a conversation with Bob Stein of The Future of the Book project).
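To make the idea concrete, here is a minimal sketch of the record the collection site might store for each submission. The field names and validation rules are my assumptions for illustration, not a fixed design:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ArtifactRecord:
    """One student submission in the aggregation database (hypothetical schema)."""
    student_name: str   # filled in automatically from the Active Directory session
    url: str            # public URL of the student's work
    tags: list[str]     # required keywords
    description: str = ""             # optional brief description
    ownership_affirmed: bool = False  # student affirms the content is theirs and legal
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def validate(self) -> list[str]:
        """Return a list of problems; an empty list means the record is acceptable."""
        problems = []
        if not self.url.startswith(("http://", "https://")):
            problems.append("URL must be an http(s) link")
        if not self.tags:
            problems.append("at least one tag is required")
        if not self.ownership_affirmed:
            problems.append("ownership and legality must be affirmed")
        return problems
```

A submission with a public URL, at least one tag, and the affirmation checked passes `validate()`; anything else comes back with a list of problems the form can show the student.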

The image that leads this post illustrates my idea:

  1. Students log in to a collection site with Active Directory (no new account needed). The collection website asks for the URL of the student’s work, publicly available anywhere online; a brief description (optional, and placed on the form below the tags field); content tags or keywords (required); and a commitment that the content belongs to the student and is legal. The student’s name is automatically associated with the content after the student logs into the site with Active Directory.
  2. A service running on the site creates a JPG or PNG image of the rendered page at the URL the student supplies, which is added to the content’s entry in the aggregation database. The site’s backend takes the URL, loads it in WebKit, and captures the rendered page as a JPG or PNG. CutyCapt does this kind of work.
  3. On the public-facing side of the aggregation website, the students’ work is presented either as a grid of images (with ordering options based on dominant color, date of publication, or tags) or as a word cloud of tags (which can be clicked to reveal the artifact thumbnails associated with that tag). Other possibilities include tag co-occurrence: visually depicting links between different tags, etc. In the visual presentation of artifacts, each square thumbnail enlarges as the user mouses over it, revealing a larger preview of the content, its description, tags, the student’s name, etc. (think of Mac OS X’s dock animation). There are many different ways to use visualization techniques and technologies to make the presentation of student work interesting, engaging, and layered with additional meaning and context.
  4. Finally, after the project is completed, the archive of student work exists online on the website and is distributed among the students on flash drives. The content can be organized in directories for each aggregated student project, or packaged with a Java app that recreates the functionality of the website (Java could be used on the presentation site, too: the website connects to an online database and the thumb-drive version connects to a local database).
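The capture service in step 2 could be a thin wrapper around CutyCapt. This is a sketch that assumes CutyCapt is installed on the server (and, on a headless box, something like xvfb-run to provide a display); the helper only builds the command line, and the commented-out call shows how the backend would invoke it:

```python
import subprocess
from pathlib import Path

def capture_url(url: str, out_path: Path, width: int = 1024) -> list[str]:
    """Build the CutyCapt command that renders `url` to an image at `out_path`."""
    return [
        "CutyCapt",
        f"--url={url}",
        f"--out={out_path}",       # output format is inferred from the extension
        f"--min-width={width}",
    ]

# To actually capture (requires CutyCapt; on a headless server, prefix with xvfb-run):
# subprocess.run(capture_url("https://example.com", Path("shot.png")), check=True)
```

The resulting PNG or JPG path would then be written into the artifact’s entry in the aggregation database alongside the student-supplied metadata.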
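For the dominant-color ordering in step 3, one simple approach (a sketch, not the only way) is to average each thumbnail’s pixels and sort artifacts by the hue of that average color. The `pixels` field here is a hypothetical list of (R, G, B) tuples extracted from each rendered thumbnail:

```python
import colorsys

def dominant_hue(pixels: list[tuple[int, int, int]]) -> float:
    """Average the RGB pixels and return the hue (0.0-1.0) of that average color."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return hue

def sort_by_color(artifacts: list[dict]) -> list[dict]:
    """Order artifacts by the hue of their thumbnail's average color."""
    return sorted(artifacts, key=lambda a: dominant_hue(a["pixels"]))
```

Averaging is crude (a half-red, half-green page averages to a muddy yellow), but it is enough to give the grid a rainbow-like arrangement; a real implementation might cluster pixels instead of averaging them.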

I am a professor of English at the New York City College of Technology, CUNY, whose teaching includes composition and technical communication, and whose research focuses on 20th/21st-century American culture, science fiction, neuroscience, and digital technology.

Posted in Georgia Tech, New Media, Pedagogy