Tag: Artificial Intelligence

  • Reflections on a Month of LinkedIn Learning

Photo of a business cat taking notes in his office. Image created with Stable Diffusion.

    As I wrote at the beginning of July here, I planned to take advantage of LinkedIn Learning’s free one-month trial. I wanted to report back on my experience of taking LinkedIn Learning courses and provide more details about some of my tips that might help you be more successful with LinkedIn Learning.

    Breakdown of the Courses and Learning Paths

    LibreOffice Calc spreadsheet showing Jason's LinkedIn courses and time totals.

    I created the spreadsheet above in LibreOffice Calc as a list of all of the courses I had completed between June 29 and August 3 (I’m including the end of June courses in the free Career Essentials in Generative AI by Microsoft and LinkedIn that gave me the idea to continue with the free one month trial period). I included the instruction time for each course. This allowed me to calculate that I had completed 43 hours 11 minutes of course instruction across 39 courses during my LinkedIn Learning trial period.
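The spreadsheet math is simple enough to sketch in a few lines of Python. The durations below are made-up placeholders for illustration, not my actual course list:

```python
# Sum (hours, minutes) course durations, as the LibreOffice Calc sheet does.
def total_minutes(durations):
    """Total minutes from a list of (hours, minutes) tuples."""
    return sum(h * 60 + m for h, m in durations)

courses = [(1, 30), (2, 45), (0, 50)]  # placeholder durations
total = total_minutes(courses)
print(f"{total // 60}h {total % 60}m")  # 5h 5m
```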

    I regret not keeping track of how long I spent on each course, which was far longer due to pausing the video to write notes, studying notes, taking quizzes, writing assignments, and taking exams. I believe the 50% extra time per course that I wrote about in July holds true.

    I focused on two main areas: Generative AI, which I am building into my workflows and maintaining a pedagogical bibliography for here; and Diversity, Equity, and Inclusion (DEI) Communication Best Practices, which I wanted to use to improve my teaching practices by structuring my classroom as supportive and welcoming to all students.

    In the Generative AI courses, I learned about machine learning, different forms of generative AI, how generative AI is integrated (or being integrated) into local and server software, and frameworks for critique of AI systems in terms of ethics, bias, and legality. Also, I took some courses on Python to get an inkling of the code underpinning many AI initiatives today.

In the DEI Communication Best Practices cluster of courses, I learned helpful terminology, techniques for engagement, what to do to support and include others, and how to be an ally (mostly with an emphasis on the workplace, but thinking about how to leverage these lessons in the classroom). These courses covered combating discrimination, planning accessibility from the beginning for the benefit of all, and supporting neurodivergence.

Overall, each learning experience was beneficial to my understanding of the topic. However, some instructors delivered better courses–for my way of learning–by employing repetition, anchoring key topics with words and definitions on the video (which you can pause and write down), giving more quizzes over shorter amounts of material (instead of fewer quizzes over longer spans of material), and giving students mini projects or assignments to reinforce the lesson (e.g., pause and write about this, or pause the video, solve this problem, and “report back”–the course isn’t interactive, but the idea is to compare your solution to the instructor’s after the video resumes).

All of the courses provide a lot of information in a very short amount of time. In some cases, the information compression is Latvian repack level. Even taking notes in shorthand, I could not keep up in some instances. To capture all of the information, I had to pause videos repeatedly, rewind (often using the 10-second replay), and read the transcript.

    While I enjoyed the standalone courses, the Learning Paths provided a sequence and overlap in material that helped reinforce what was being taught. Also, Learning Paths helped me see connections between the broader implications of the topic (e.g., DEI, accessibility, neurodiversity, etc.) as well as explore certain aspects of the topic in more depth (e.g., how to approach conversations on uncomfortable topics or how to ask for permission to be an ally in a given situation).

    Each instructor has a unique way of speaking and engaging the learner. I really enjoyed the diversity of the instructors across all topics.

    The accessibility features built into LinkedIn Learning helped me follow along and make accurate notes. In particular, I always turned on closed captioning and clicked the “Transcript” tab beneath the video so that I could easily follow along and pause the video when there was a keyword or definition or illustration that I wanted to capture in my notes.

    LibreOffice Calc chart showing how many hours of courses were completed on the days between 6/29 and 8/3/2023.

    I added the course instruction time for those courses completed on the same day to generate the chart above that illustrates the ebb and flow of my course completion across the month. In some cases, I spread out the instruction across days to give myself enough time to learn and practice the topics being discussed (e.g., Python programming or Stable Diffusion image generation). There were other days that I paused my learning to work on my research or simply to take a break from learning.

On LinkedIn Learning, some of the courses are grouped together into what are called Learning Paths, which yield a separate certificate of completion from the certificates that you earn for each individual course. In some cases, as with Career Essentials in Generative AI by Microsoft and LinkedIn, a Learning Path also includes a timed exam (1.5 hours) that must be passed before the Learning Path certificate is given. About 50%, or 21 hours 45 minutes, of my 43 hours 11 minutes of course instruction time applied to five earned Learning Paths:

    • Career Essentials in Generative AI by Microsoft and LinkedIn, 3h 49m
    • Accessibility and Inclusion Advocates, 3h 18m
    • Diversity, Inclusion, and Belonging for All, 6h 16m
    • Responsible AI Foundations, 4h 15m
    • LinkedIn’s AI Academy, 3h 54m

    LinkedIn Learning Success Tips

Overall, I want to reiterate the tips that I wrote about here for being successful at LinkedIn Learning–both in terms of how you learn and how you demonstrate what you have learned. Below are those tips again, with more detail based on my experience this past month.

    Be an Active Learner: Take Notes, Do the Exercises, and Complete the Quizzes

    Fanned out loose-leaf notes that Jason took during his LinkedIn Courses.

    The one thing that I would like to stress above all others is how important it is to treat a LinkedIn Learning course like a classroom learning experience. What I mean by that is that you need to set aside quality time for learning, free from distraction, where you can take notes and complete the exercises, and study what you’ve learned before taking quizzes or exams. Employing your undivided attention, writing your notes by hand in a notebook, and completing quizzes, exams, and assignments all contribute to your learning, integrating what you’ve learned with your other knowledge, and preparing yourself to recall and apply what you’ve learned in other contexts, such as in a class or the workplace.

Unless you have an eidetic memory, the fact is that you won’t learn a lot by passively watching or listening to courses. And even if you do have a photographic memory, all you will gain are facts, not the integration, connections, and recall that come from using and reflecting on what you have learned.

    Remember to Add Certifications to Your LinkedIn Profile

    Jason Ellis's Licenses & Certifications section on his LinkedIn Profile.

    Remember to add each completed LinkedIn Course and Learning Path certification to your profile. They will appear in their own section as they do on mine shown above.

    Completed Courses and Learning Paths do not automatically appear on your profile (consider: someone might not want all of their training to appear on their LinkedIn Profile for a variety of reasons).

    To add a Course or Learning Path to your LinkedIn Profile, go to LinkedIn Learning > click “My Learning” in the upper right corner > click “Learning History” under “My Library” on the left > click the “. . .” to the right of the Course or Learning Path > click “Add to Profile” and follow the prompts.

LinkedIn also gives you the option to create a post on your Profile about your accomplishment, which you should opt to do. When you do this, it auto-suggests skills to add to the Skills section of your Profile. You can have up to 50 skills on your profile, so keep track of what’s there and prune/edit the list as needed to highlight your capabilities for the kinds of jobs that you are looking for. More on Skills further down the page.

    Add Certifications to Your Resume or CV

    Excerpt image of Jason Ellis' CV. Link to CV below.

As shown above and viewable on my CV here, I added links to my LinkedIn Course and Learning Path certifications in a dedicated section of my CV. In addition to the unique link to each certification, I included the organization that issued it (i.e., LinkedIn) and the date of completion. You can do the same on your CV or resume.

    To get the link to a Course or Learning Path completion certificate, go to LinkedIn Learning > click “My Learning” in the upper right corner > click “Learning History” under “My Library” on the left > click the “. . .” to the right of the Course or Learning Path > click “Download certificate” > click “LinkedIn Learning Certificate” > toggle “On” under the top section titled “Create certificate link” > Click “Copy” on the far right.

While you are here, you can download a PDF of your certificate for safekeeping at the bottom left of this last screen. You can add these PDFs to a professional portfolio or pair them with a deliverable that you create based on the skills that you gained from that course to demonstrate your learning and mastery.

    Demonstrate Your Skills

    Jason Ellis' Skills section on his LinkedIn Profile.

As I mentioned above, when you post about completing a course, LinkedIn Learning can autogenerate relevant skill terms to add to the Skills section of your Profile (as shown above on my Profile). When you have the spare time and focus, you should occasionally click on “Demonstrate skills” (you can do this without a LinkedIn Learning subscription). This gives you options for taking exams related to different skills that you’ve added to the Skills section of your Profile. If you pass, it provides some proof that you know something about that particular skill. Beware, though: these exams can be tough. When I took the HTML exam, I discovered big gaps in my knowledge: I had learned HTML years before but had not kept up with changes to the language in the intervening years. While I passed the exam, I made notes about the questions that I got wrong so that I knew what to learn more about to fill in those gaps.

    Also, some skills don’t have exams associated with them. In those cases, you may submit a video or essay to demonstrate your experience to potential recruiters or hiring managers. If you do this, you should plan it out, shoot and edit your video to give the best visual and auditory impression, or write and revise your essay so that it is of the highest professional quality.

    Is It Worth It?

    Looking back on what I learned, how I learned it, and who I learned it from, I’m glad that I invested the time and energy into a month of LinkedIn Learning. I’ve already started putting some of the lessons into practice (e.g., the generative AI and ethical AI courses), and I’m planning out how I will roll out the DEI approaches in my courses when I return to teaching in Fall 2024 (I am on sabbatical this academic year). In the future, I plan to pay for LinkedIn Learning when additional classes are available and I have the time to immerse myself in learning.

If you’re looking to skill up, I think that LinkedIn Learning can be beneficial if you go into it with a learning and reflective mindset. This means that you are willing to invest your attention, time, energy, and thought in learning the course material; reflect on how what you learn connects to what you’ve already learned through school and work experience; apply what you’ve learned to deliverables that demonstrate you have integrated it (e.g., a detailed post on your LinkedIn Profile, a blog post, a poster, a video, an addition to your professional portfolio, etc.); and reflect, preferably in writing, on what you’ve learned, how you applied it, what you would like to accomplish next, and how to take those next steps.

    As I said above, you likely won’t gain much by passively listening to LinkedIn Learning Courses while doing other things or being distracted by your environment. Invest in this form of learning and you will add to what you know and can do. In that spirit, it’s like my Grandpa Ellis used to tell me, “Jake, no one can take away your education!”

  • Mirrored Moment of Computing Creation: KPT Bryce for Macintosh

Outer space scene rendered in KPT Bryce 1.0.1 on Mac OS 7.5.5.

    A conversation on LinkedIn yesterday with a former Professional and Technical Writing student about user experience (UX) and generative artificial intelligence (AI) technologies reminded me of the UX innovations around an earlier exciting period of potential for computers creating art: KPT Bryce, a three-dimensional fractal landscape ray trace rendering program for Mac OS released in 1994. It was one of the first programs that I purchased for my PowerMacintosh 8500/120 (I wrote about donating a similar machine to the Georgia Tech Library’s RetroTech Lab in 2014 here). Much like today when I think about generative AI, my younger self thought that the future had arrived, because my computer could create art with only a modicum of input from me thanks to this new software that brought together 3D modeling, ray tracing, fractal mathematics, and a killer user interface (UI).

Besides KPT Bryce’s functionality to render scenes like the one that I made for this post (above), what was great about it was its user interface, which made editing and configuring your scene before rendering intuitive and easy to conceptualize. As you might imagine, 3D rendering software in the mid-1990s was far less intuitive than today (e.g., I remember a college classmate spending hours tweaking a text-based description of a scene that would then take hours to render in POVRay in 1995), so KPT Bryce’s ease of use broke down barriers to using 3D rendering software and opened new possibilities for average computer users to leverage their computers for visual content creation. It was a functionality and UX revolution.

    Below, I am including some screenshots of KPT Bryce 1.0.1 emulated on an installation of Mac OS 7.5.5 on SheepShaver (N.B. I am not running SheepShaver on BeOS–I’ve modified my Debian 12 Bookworm xfce installation to have the look-and-feel of BeOS/Haiku as I documented here).

KPT Bryce 1.0 program folder copied to the computer’s hard drive from the KPT Bryce CD-ROM.
KPT Bryce 1.0 launch screen.
KPT Bryce initial scene randomizer/chooser. Note the UI elements on the lower window border.
KPT Bryce’s scene editor opens after making initial selections.
KPT Bryce’s rendering screen–note the horizontal dotted yellow line indicating the progression of that iterative ray tracing pass on the scene.
KPT Bryce rendering completed. It can be saved as an image by clicking on File > Save As Pict.

  • All In on Artificial Intelligence

    An anthropomorphic cat wearing coveralls, working with advanced computers. Image generated with Stable Diffusion.

As I wrote recently about my summertime studying and documented on my generative artificial intelligence (AI) bibliography, I am learning all that I can about AI–how it’s made, how we should critique it, how we can use it, and how we can teach with it. As with any new technology, the more that we know about it, the better equipped we are to master it and debate it in the public sphere. I don’t think that fear and ignorance about a new technology are good positions to take.

I see, like many others do, AI as an inevitable step forward in how we use and what we can do with computers. However, I don’t think that these technologies should only be under the purview of big companies and their (predominantly) man-child leaders. Having more money and market control does not mean one is a more ethical practitioner of AI. In fact, it seems that some industry leaders are calling for more governmental oversight and regulation not because they have real worries about AI’s future development but because they are in a leadership position in the field and likely can shape how the industry is regulated through industry connections with would-be regulators (i.e., the revolving door between industry and government seen in other regulatory agencies).

Of course, having no money or market control in AI does not mean one is potentially more ethical with AI either. But ensuring that there are open, transparent, and democratic AI technologies creates the potential for a less skewed playing field. While there’s the potential for abuse of these technologies, making them available to all creates the possibility for many others to use AI for good. Additionally, if we were to keep AI behind locked doors, only those with access (legal or not) would control the technology, and there’s nothing to stop other countries and the good/bad actors in them from using AI however they see fit–for good or ill.

To play my own small role in studying AI, using generative AI, and teaching about AI, I wanted to build my own machine learning-capable workstation. For the past few months, before making any upgrades, I maxed out what I could do with an Asus Dual RTX 3070 8GB graphics card and 64GB of RAM. I experimented primarily with Stable Diffusion image generation models using Automatic1111’s stable-diffusion-webui and LLaMA text generation models using Georgi Gerganov’s llama.cpp. An 8GB graphics card like the NVIDIA RTX 3070 provides a lot of horsepower with its 5,888 CUDA cores and high memory bandwidth to its on-board memory. Unfortunately, the on-board memory is too small for larger models or for adjusting models with multiple LoRAs and the like. For text generation, you can split a model’s layers between the graphics card’s memory and your system’s RAM, but this is inefficient and slow compared to having the entire model loaded in the graphics card’s memory. Therefore, a video card with a significant amount of VRAM is a better solution.
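To see why partial offloading helps only so much, here is a rough back-of-the-envelope sketch; the model size, layer count, and overhead figures are illustrative assumptions, not measurements:

```python
# Rough estimate of how many transformer layers fit in VRAM when offloading
# part of a model to the GPU. All figures here are illustrative assumptions.
def layers_that_fit(model_gb, n_layers, vram_gb, overhead_gb=1.0):
    """Layers that fit in VRAM, reserving overhead for the CUDA context."""
    per_layer_gb = model_gb / n_layers
    usable_gb = max(vram_gb - overhead_gb, 0)
    return int(usable_gb // per_layer_gb)

# e.g., a ~48GB, 80-layer model on an 8GB card:
print(layers_that_fit(48, 80, 8))  # 11 -- the rest spills to system RAM
```

With most layers still in system RAM, every token pays the slow-memory price; llama.cpp exposes roughly this trade-off through its GPU layer offload setting.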

    Previous interior of my desktop computer with air cooling, 128GB RAM, and Asus Dual Geforce RTX 3070 8GB graphics card.

For my machine learning-focused upgrade, I first swapped out my system RAM for 128GB of DDR4-3200 (4 x 32GB Corsair, shown above). This allowed me to load a 65B-parameter model into system RAM and use my Ryzen 7 5800X 8-core/16-thread CPU to perform the operations. The CPU usage while llama.cpp is processing tokens looks like an EEG:

    CPU and memory graphs show high activity during AI inference.

While running inference on the CPU was certainly useful for my initial experimentation (and the CPU usage graph looks cool), it was exceedingly slow. Even an 8-core/16-thread CPU is ill-suited for AI inference, in part because it lacks the massive parallelization of graphics processing units (GPUs), but perhaps more importantly because of the system memory bottleneck, which is only 25.6 GB/s for DDR4-3200 RAM according to Transcend.
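That 25.6 GB/s figure falls directly out of the DDR4-3200 spec: 3,200 mega-transfers per second across a 64-bit (8-byte) channel. A quick check:

```python
# Theoretical peak bandwidth of a single DDR4-3200 memory channel.
transfers_per_second = 3200 * 10**6   # 3200 MT/s
bus_width_bytes = 64 // 8             # 64-bit channel = 8 bytes per transfer
bandwidth_gb_s = transfers_per_second * bus_width_bytes / 10**9
print(bandwidth_gb_s)  # 25.6
```

Note that this is per channel; a dual-channel configuration can double the theoretical peak, though that is still far below GPU VRAM bandwidth.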

Video cards, especially those designed by NVIDIA, provide specialized parallel computing capabilities and enormous memory bandwidth between the GPU and video RAM (VRAM). NVIDIA’s CUDA is a very mature system for parallel processing that has been widely accepted as the gold standard for machine learning (ML) and AI development. CUDA is, unfortunately, closed source, but many open source projects have adopted it due to its dominance within the industry.

My primary objective when choosing a new video card was to have enough VRAM to load a 65B LLaMA model (roughly 48GB). One option is to install two NVIDIA RTX 3090 or 4090 video cards, each with 24GB of VRAM, for a total of 48GB. This would solve my needs for running text generation models, but it would limit how I could use image generation models, which can’t be split between multiple video cards without a significant performance hit (if at all). So, a single card with 48GB of VRAM would be ideal for my use case. Three options that I considered were the Quadro RTX 8000, A40, and RTX A6000. The Quadro RTX 8000 uses the older Turing architecture, while the A40 and RTX A6000 use the newer Ampere architecture (the latest Ada architecture was outside of my price range). The Quadro RTX 8000 has a memory bandwidth of 672 GB/s, while the A40 has 696 GB/s and the A6000 has 768 GB/s. Also, the Quadro RTX 8000 has far fewer CUDA cores than the other two cards: 4,608 vs. 10,752 for the A40 and A6000. Considering the specs, the A6000 was the better graphics card, but the A40 was a close second. However, the A40, even found at a discount, would require a DIY forced-blower system, because it is designed for rack-mounted servers with their own forced-air cooling. 3D-printed solutions that mate fans to the end of an A40 are available on eBay, or one could rig something DIY. But, for my purposes, I wanted a good card with its own cooling solution and a warranty, so I went with the A6000 shown below.
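The “roughly 48GB” figure depends on how the model is quantized. A rough lower-bound estimate (ignoring the KV cache and other runtime overhead) is just parameters times bits per parameter:

```python
# Approximate memory footprint of a 65B-parameter model at common precisions.
# These are lower bounds: the KV cache and activations add real-world overhead.
params = 65 * 10**9

def footprint_gb(bits_per_param):
    """Model weight size in GB at a given precision."""
    return params * bits_per_param / 8 / 10**9

for bits in (16, 8, 4):
    print(f"{bits}-bit: {footprint_gb(bits):.1f} GB")
```

By this estimate, full 16-bit weights (130GB) would not fit even across two 24GB cards, while a quantized model plus runtime overhead lands in the range that a single 48GB card can hold comfortably.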

    nvidia A6000 video card

Another benefit of the A6000 over the gaming performance-oriented 3090 and 4090 graphics cards is that it requires much less power–only 300 watts at load (vs. ~360 watts for the 3090 and 450 watts for the 4090). Even with this lower power draw, though, I only had a generic 700-watt power supply. I wanted to protect my investment in the A6000 and ensure it had all of the power that it needed, so I opted for a recognized name-brand PSU–a Corsair RM1000x. It’s a modular PSU that can provide up to 1,000 watts to the system (it only provides what is needed–it isn’t using 1,000 watts constantly). You can see the A6000 and Corsair PSU installed in my system below.

    new computer setup with 128GB RAM and A6000 graphics card

Now, instead of waiting 15-30 minutes for a response to a long prompt run on my CPU and system RAM, it takes mere seconds to load the model into the A6000’s VRAM and generate a response, as shown in the screenshot below of oobabooga’s text-generation-webui using the Guanaco-65B model quantized by TheBloke to provide definitions of science fiction for three different audiences. The tool running in the terminal in the lower right corner is NVIDIA’s System Management Interface, which can be opened by running “nvidia-smi -l 1”.

    text generation webui running on the a6000 video card

    I’m learning the programming language Python now so that I can better understand the underlying code for how many of these tools and AI algorithms work. If you are interested in getting involved in generative AI technology, I recently wrote about LinkedIn Learning as a good place to get started, but you can also check out the resources in my generative AI bibliography.

  • Updates to the Generative AI and Pedagogy Bibliography

    A cute humanoid robot writing at a desk with bookshelf in background. Image created with Stable Diffusion.

    Over the weekend, I made some significant updates to the Generative AI and Pedagogy Bibliography and Resource List page, which includes background, debates, teaching approaches, applications, disciplinary research, and a list of online resources. I started it as a place to organize my own research while sharing it back out to others.

    It now features a table of contents at the top of the page under the introduction.

    I added about 50 articles and books to the bibliography, which now contains 232 sources.

    And, I added three links to the resource list at the bottom of the page which brings it to 42 links.

    I will periodically add more entries to the list as my own research progresses. But, it’s important to note that this bibliography isn’t meant to be exhaustive.

  • Recovered Writing: Undergraduate SF Lab Project, “Development of AI in Science Fiction,” Fall 2004

    This is the twenty-eighth post in a series that I call, “Recovered Writing.” I am going through my personal archive of undergraduate and graduate school writing, recovering those essays I consider interesting but that I am unlikely to revise for traditional publication, and posting those essays as-is on my blog in the hope of engaging others with these ideas that played a formative role in my development as a scholar and teacher. Because this and the other essays in the Recovered Writing series are posted as-is and edited only for web-readability, I hope that readers will accept them for what they are–undergraduate and graduate school essays conveying varying degrees of argumentation, rigor, idea development, and research. Furthermore, I dislike the idea of these essays languishing in a digital tomb, so I offer them here to excite your curiosity and encourage your conversation.

    In 2002, I took Professor Lisa Yaszek’s Science Fiction class at Georgia Tech. It was an important milestone in my life’s journey, but at that time, I had not yet looked beyond possible career paths in IT or UX design. Then, in early 2004, Professor Yaszek organized a symposium in conjunction with the Georgia Tech Library on Mary Shelley’s Frankenstein. She invited the SF writer Kathleen Ann Goonan to visit campus and give a reading. At the time, I was in Professor Yaszek’s Gender Studies class and we had read some of Kathy Goonan’s work. I was hooked, and I read more of her novels before her arrival to campus. Then, during the day of her visit, I had the good fortune to speak with her and she was kind enough to give me the gift of her time and conversation.

    Later, during the symposium, I was able to speak with Georgia Tech’s former SF professor, Bud Foote. I had heard legends of him when I first started at Tech, but I was never able to take his SF class while he was still teaching. Luckily, I was able to hear him give a presentation for the symposium and talk to him afterward.

    After that day of talking with Kathy Goonan and Professor Foote, I told Professor Yaszek that I had made up my mind–I was going to make a career out of studying SF. Ten years later, here I am–an SF scholar doing postdoctoral work at my alma mater!

    I noticed that Professor Yaszek had a number of student researchers who helped with the Frankenstein symposium. In addition to organizing the event, they put together some cool research material on a website. I thought that was impressive, and I wondered if I could get involved with that kind of work.

I can’t remember if I asked Professor Yaszek about this or if she told us about it in the Gender Studies class, but I learned that she was planning a new PURA (Presidential Undergraduate Research Award) funded endeavor for undergraduate Tech students: the SF Lab. The goal for each student in the group would be to contribute 1) an introduction to a specific SF topic, 2) a linked bibliography on the SF topic selected, 3) an annotated bibliography of important works featuring that topic found in the Georgia Tech Science Fiction Collection (formerly the Bud Foote Science Fiction Collection), and finally, 4) related real-world resources being developed at Tech. I jumped at this opportunity and proposed to write an entry on artificial intelligence.

    After winning a PURA award for my project proposal, I worked with several other students to workshop our individual projects. We had weekly meetings for workshopping each part of the project. The introduction took longer than the other parts, because it involved more writing and integrated research. Each SF Lab researcher would bring printouts of his or her work to circulate with the others and Professor Yaszek. We would take the feedback, revise for the next week, and return with a new draft. It was a streamlined process that involved a lot of revision work, but I cannot thank Professor Yaszek enough for helping me integrate that kind of rigor into my revision processes. It has repaid me in spades over the years.

    The following is my SF Lab project on AI. Please note that the links might be outdated and/or dead.

    Jason W. Ellis

    Professor Lisa Yaszek

    SF Lab Independent Research Project for

    Fall 2004

    Development of AI in SF

    Part I – Introduction

    Artificial Intelligence (AI) is intelligence and self-awareness demonstrated by a physical but inorganic artifact.  AI researchers include experts from a coalition of diverse disciplines including computer science (software written for computer hardware) and psychology (unraveling the human software running on biological hardware).

    John McCarthy is credited as first coining the term “artificial intelligence” in the August 31, 1955 paper he coauthored, “The Dartmouth Summer Research Project on Artificial Intelligence.”  This research project took place in the Summer of 1956 and its proposal states in the first paragraph that “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” (1).  McCarthy’s definition continues to be the accepted broad definition of AI.  Science fiction (SF) authors internalized this definition in their works that involve AI.  Patricia S. Warrick explicitly states the human focus of AI built into McCarthy’s definition when she writes in her 1980 book, The Cybernetic Imagination in Science Fiction, “Artificial intelligence…attempts to discover and describe aspects of human intelligence that can be simulated by machines” (11).

    SF is the primary literature field in which authors explore stories about AI.  SF authors are generally concerned only with “strong AI” or self-aware, intelligent machines that mimic human cognition.  However, there are a few stories that address “weak AI” which are programs that act as if they are intelligent, but not self-aware.  SF authors have written about the possibilities of AI as well as the issues surrounding artificial intelligence.  There are three main types of AI stories:  analog dystopic AI (1872-1930), digital utopic AI (1930-1950), and digital dystopic AI (1950-Present).

    Analog dystopic AI stories first appear in the late 19th century and they are characterized by anxieties about the dangerous nature of analog machine intelligences (built of gears and cogs instead of transistors).  The first reference to machine intelligence occurs in Samuel Butler’s satire Erewhon (1872).  Butler accomplished his goal of satirizing the theory of evolution by applying evolution to machines.  These machines become self-aware and come to control man.  Other stories from this period involved automatons (mechanical men that displayed intelligence) that were built for an intellectual purpose such as playing chess.  An example of this is Ambrose Bierce’s “Moxon’s Master” (1894) which had a dystopic ending that involved the mechanical chess player killing its creator after being checkmated.  These dystopian stories of analog AI continued to dominate the first three decades of the 20th century.  Karl Capek’s R.U.R. (1921), which introduced the term “robot” to the English language, is another prime example of this storytelling.

    American SF ignited in the 1930s with a shift to digital utopic stories that feature digital machine intelligences (e.g., positronic brains, transistors, and integrated circuits).  John W. Campbell’s story “When the Atoms Fail” (1930) is the first to describe a machine that is unquestionably a digital computer (though not self-aware).  His next computer story, “The Last Evolution” (1932), is about a machine that has independent thought.  In the 1940s, Campbell helped Isaac Asimov create the Three Laws of Robotics for his robot stories, and Asimov established himself as “the father of robot stories in SF” (Warrick 54).  These digital utopic AI stories present machines as predictable reasoning beings that follow rules allowing them to live and work with humans.  They do not explore the philosophical ramifications of the creation of artificial life.  Asimov’s 1950 publication of I, Robot, a collection of his first robot short stories, can be said to mark the end point of the digital utopic AI era.

    After World War II, SF authors wrote digital dystopic AI stories to explore questions concerning the ethics of a science and technology that produced the nuclear bomb (and the first digital computers).  Two notable works from the early part of this era are Arthur C. Clarke’s 2001:  A Space Odyssey (1968) and Philip K. Dick’s Do Androids Dream of Electric Sheep? (1968).  These authors place an emphasis on the philosophical and ethical conflicts that may develop when humanity creates new life in the form of artificial brains that mirror the human mind.  More recently, depictions of self-aware AIs have become extremely elaborate as the real world entered a much more computerized and inter-networked era.  William Gibson’s Neuromancer (1984) in particular and cyberpunk in general further expand the scope of digital dystopic AI stories by interlinking AI, cybernetics, and global capitalism.

    Thus, AI is a historically embedded concept in SF literature.  The science and technology behind AI has evolved from mere conjecture to a closer possibility.  Authors of AI stories take the science and technology of their historical moments and extrapolate the forms that AI might take.  Furthermore, AI authors discuss, both implicitly and explicitly, the philosophical and ethical issues that inevitably arise with new technology and more specifically with the creation of self-aware machines.

    Part II – Linked Bibliography

    A.  Theory and Criticism

    i.  Theory

    Kurzweil, Ray.  The Age of Intelligent Machines.  Cambridge, MA:  MIT Press, 1990.

    Link to:  http://www.amazon.com/exec/obidos/ASIN/0140282025/qid=1094574944/sr=ka-1/ref=pd_ka_1/104-3233143-6155107

    McCarthy, J., M. L. Minsky, N. Rochester, and C. E. Shannon.  “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.”  August 31, 1955.

    Link to:  http://www-formal.stanford.edu/jmc/history/dartmouth.html

    Minsky, Marvin.  The Society of Mind.  New York:  Simon and Schuster, 1988.

    Link to:  http://www.amazon.com/exec/obidos/tg/detail/-/0671657135/qid=1095016288/sr=8-1/ref=pd_cps_1/104-4983846-7328739?v=glance&s=books&n=507846

    Neumann, John von.  The Computer and the Brain.  New Haven, CT:  Yale University Press, 1958.

    Link to:  http://www.amazon.com/exec/obidos/tg/detail/-/0300084730/qid=1095020573/sr=8-9/ref=sr_8_xs_ap_i9_xgl14/104-4983846-7328739?v=glance&s=books&n=507846

    Turing, A. M.  “Computing Machinery and Intelligence.”  Mind 59.236 (1950):  433-460.

    Link to:  http://www.abelard.org/turpap/turpap.htm

    ii.  Criticism

    Clute, John and Peter Nicholls, eds.  The Encyclopedia of Science Fiction.  New York:  St. Martin’s Press, 1995.

    Link to:  http://www.amazon.com/exec/obidos/tg/detail/-/031213486X/qid=1095022402/sr=8-1/ref=sr_8_xs_ap_i1_xgl14/104-4983846-7328739?v=glance&s=books&n=507846

    Lem, Stanislaw.  “Robots in Science Fiction.”  SF:  The Other Side of Realism, ed. Thomas D. Clareson.  Bowling Green, OH:  Bowling Green University Popular Press, 1971.

    Link to:  http://www.amazon.com/exec/obidos/tg/detail/-/0879720239/qid=1094574872/sr=1-1/ref=sr_1_1/104-3233143-6155107?v=glance&s=books

    Stork, David G., ed.  HAL’s Legacy:  2001’s Computer as Dream and Reality.  Cambridge, MA:  MIT Press, 1996.

    Link to:  http://mitpress.mit.edu/e-books/Hal/

    Telotte, J.P.  Replications:  A Robotic History of the Science Fiction Film.  Urbana, IL:  University of Illinois Press, 1995.

    Link to:  http://www.amazon.com/exec/obidos/tg/detail/-/0252064666/qid=1095016985/sr=8-1/ref=sr_8_xs_ap_i1_xgl14/104-4983846-7328739?v=glance&s=books&n=507846

    Warrick, Patricia S.  The Cybernetic Imagination in Science Fiction.  Cambridge, MA:  MIT Press, 1980.

    Link to:

    B.  Primary texts

    i.  Analog Dystopic AI

    Bierce, Ambrose.  “Moxon’s Master.”  1894.

    Link to:  http://www.gutenberg.net/etext/4366

    Butler, Samuel.  Erewhon.  1872.

    Link to:  http://www.gutenberg.net/etext/1906

    Čapek, Karel.  R.U.R.  1921.

    Link to:  http://www.czech-language.cz/translations/rur-introen.html

    Merritt, Abraham.  The Metal Monster.  New York:  F.A. Munsey, August 7, 1920 (serialized over 8 issues in Argosy All-Story Weekly).

    Link to:  http://www.gutenberg.net/etext/3479

    ii.  Digital Utopic AI

    Asimov, Isaac.  I, Robot.  New York:  Gnome Press, 1950.

    Link to:  http://www.amazon.com/exec/obidos/tg/detail/-/0553294385/qid=1094613589/sr=8-1/ref=pd_ka_1/102-6956306-1931346?v=glance&s=books&n=507846

    Campbell, John W., Jr.  “The Last Evolution.”  Amazing Stories August 1932.

    Link to:  http://www.amazon.com/exec/obidos/tg/detail/-/0345249607/qid=1094575448/sr=1-1/ref=sr_1_1/104-3233143-6155107?v=glance&s=books

    iii.  Digital Dystopic AI

    Clarke, Arthur C.  2001:  A Space Odyssey.  New York:  New American Library, 1968.

    Link to:  http://www.amazon.com/exec/obidos/ASIN/0451457994/qid=1094575222/sr=ka-1/ref=pd_ka_1/104-3233143-6155107

    Dick, Philip K.  Do Androids Dream of Electric Sheep?  New York:  Doubleday, 1968.

    Link to:  http://www.amazon.com/exec/obidos/ASIN/0345404475/qid=1094575195/sr=ka-1/ref=pd_ka_1/104-3233143-6155107

    Ellison, Harlan.  “I Have No Mouth, and I Must Scream.”  If March 1967.

    Link to:  http://www.amazon.com/exec/obidos/tg/detail/-/0441363954/qid=1094614806/sr=8-1/ref=sr_8_xs_ap_i1_xgl14/102-6956306-1931346?v=glance&s=books&n=507846

    Gibson, William.  Neuromancer.  New York:  Ace Books, 1984.

    Link to:  http://www.amazon.com/exec/obidos/ASIN/0441569595/qid=1094575142/sr=ka-1/ref=pd_ka_1/104-3233143-6155107

    Herbert, Frank.  Destination:  Void.  New York:  Berkley, 1966.  Revised edition, 1978.

    Link to:  http://www.amazon.com/exec/obidos/tg/detail/-/0425043665/qid=1094612264/sr=8-1/ref=sr_8_xs_ap_i1_xgl14/102-6956306-1931346?v=glance&s=books&n=507846

    Lem, Stanislaw.  The Cyberiad:  Fables for the Cybernetic Age.  New York:  The Seabury Press, 1974.

    Link to:  http://www.amazon.com/exec/obidos/tg/detail/-/0156027593/qid=1094612302/sr=8-6/ref=pd_ka_6/102-6956306-1931346?v=glance&s=books&n=507846

    C.  Films

    i.  Analog Dystopic AI

    Metropolis.  Dir. Fritz Lang.  Paramount Pictures, 1927.

    Link to:  http://www.imdb.com/title/tt0017136/

    The Phantom Empire.  Dir. B. Reeves Eason.  Mascot, 1935.

    Link to:  http://www.imdb.com/title/tt0026867/

    The Wizard of Oz.  Dir. Victor Fleming.  Metro-Goldwyn-Mayer, 1939.

    Link to:  http://www.imdb.com/title/tt0032138/

    ii.  Digital Utopic AI

    Forbidden Planet.  Dir. Fred M. Wilcox.  Metro-Goldwyn-Mayer, 1956.

    Link to:  http://www.imdb.com/title/tt0049223/

    Star Trek:  The Next Generation.  Paramount Pictures, TV series 1987-1994.

    Link to: http://www.imdb.com/title/tt0092455/

    Star Trek:  Voyager.  Paramount Pictures, TV series 1995-2001.

    Link to:  http://www.imdb.com/title/tt0112178/

    Star Wars.  Dir. George Lucas.  20th Century Fox, 1977.

    Link to:  http://www.imdb.com/title/tt0076759/

    Tank Girl.  Dir. Rachel Talalay.  United Artists, 1995.

    Link to:  http://www.imdb.com/title/tt0114614/

    iii.  Digital Dystopic AI

    2001: A Space Odyssey.  Dir. Stanley Kubrick.  Metro-Goldwyn-Mayer, 1968.

    Link to:  http://www.imdb.com/title/tt0062622/


    A.I.:  Artificial Intelligence.  Dir. Steven Spielberg.  DreamWorks, 2001.

    Link to:  http://www.imdb.com/title/tt0212720/

    Colossus:  The Forbin Project.  Dir. Joseph Sargent.  Universal, 1970.

    Link to:  http://www.imdb.com/title/tt0064177/


    Dark Star.  Dir. John Carpenter. 1974.

    Link to:  http://www.imdb.com/title/tt0069945/

    The Day the Earth Stood Still.  Dir. Robert Wise.  20th Century Fox, 1951.

    Link to:  http://www.imdb.com/title/tt0043456/

    Logan’s Run.  Dir. Michael Anderson.  Metro-Goldwyn-Mayer, 1976.

    Link to:  http://www.imdb.com/title/tt0074812/

    The Matrix.  Dir. Andy Wachowski and Larry Wachowski.  Warner Brothers, 1999.

    Link to:  http://www.imdb.com/title/tt0133093/

    Star Trek:  The Motion Picture.  Dir. Robert Wise.  Paramount Pictures, 1979.

    Link to:  http://www.imdb.com/title/tt0079945/

    The Stepford Wives.  Dir. Bryan Forbes.  Columbia Pictures, 1975.

    Link to:  http://www.imdb.com/title/tt0073747/


    The Terminator.  Dir. James Cameron.  Orion Pictures, 1984.

    Link to:  http://www.imdb.com/title/tt0088247/

    Tron.  Dir. Steven Lisberger.  Buena Vista, 1982.

    Link to:  http://www.imdb.com/title/tt0084827/

    WarGames.  Dir. John Badham.  Metro-Goldwyn-Mayer, 1983.

    Link to:  http://www.imdb.com/title/tt0086567/

    Westworld.  Dir. Michael Crichton.  MGM, 1973.

    Link to:  http://www.imdb.com/title/tt0070909/

    D.  Websites

    i.  Theory

    American Association for Artificial Intelligence.  2004.  September 7, 2004 <http://www.aaai.org/>.

    “Artificial intelligence.”  Wikipedia.  September 8, 2004.  September 12, 2004 <http://en.wikipedia.org/wiki/Artificial_intelligence>.

    Association for Computing Machinery.  2004.  September 7, 2004 <http://www.acm.org/>.

    Winston, Patrick.  6.803/6.833 The Human Intelligence Enterprise, Spring 2002.  MIT OpenCourseWare.  September 9, 2004 <http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/6-803The-Human-Intelligence-EnterpriseSpring2002/CourseHome/index.htm>.

    ii.  Literature Resources

    Index to Science Fiction Anthologies and Collections, Combined Edition.  William G. Contento.  2003.  September 7, 2004 <http://users.ev1.net/~homeville/isfac/>.

    Internet Speculative Fiction Database.  Ed. Al von Ruff.  August 22, 2004.  September 7, 2004 <http://www.isfdb.org/>.

    Isaac Asimov Home Page.  Edward Seiler.  2004.  September 7, 2004 <http://www.asimovonline.com/>.

    iii.  Film Resources

    Science Fiction Films.  Tim Dirks.  2004.  September 7, 2004 <http://www.filmsite.org/sci-fifilms.html>.

    SciFlicks.com:  Science Fiction Cinema.  2004.  September 7, 2004 <http://www.sciflicks.com/>.

    iv.  Link Collections

    AI on the Web.  Peter Norvig and Stuart Russell.  January 31, 2003.  September 7, 2004 <http://aima.cs.berkeley.edu/ai.html>.

    Science Fiction and Fantasy Research Database.  Hal W. Hall.  June 24, 2004.  September 9, 2004 <http://lib-oldweb.tamu.edu/cushing/sffrd/>.

    Ultimate Science Fiction Web Guide.  2004.  September 15, 2004 <http://www.magicdragon.com/UltimateSF/SF-Index.html>.

    Part III – Resources in the Bud Foote SF Collection

    Part III (1 of 4)

    Karel Čapek – R.U.R. (Rossum’s Universal Robots)

    Karel Čapek’s 1921 play, R.U.R. (Rossum’s Universal Robots), is an example of an analog dystopic AI story.  This work introduced the term “robot” to the English language, but the Robots (Čapek’s capitalization) in R.U.R. are more like androids than robots.  The Robots are shaped like humans, but the character Domin says that they are made “from a different matter than we are.”  These Robots have perfect memories, but they are not self-aware:  memory is divorced from self-analysis.  Using industrial chemical processes, the Robots’ individual pieces (arms, legs, organs, etc.) are cooked up from “batter” in “kneading troughs” and “mixing vats.”  Then those components are mated into a whole Robot in an assembly line operation.  Thus, gears and cogs are not present in Čapek’s Robots, but the means of their creation are partially mechanical as well as chemical.

    The leaders of R.U.R. are attempting to create a utopia for humanity by pushing off the drudgery of work onto the many Robots that the company creates.  Dr. Gall, who is in charge of the “physiological and research divisions of R.U.R.,” modifies a few Robots to be more human-like, and in doing so, “they stopped being machines.”  These modified Robots incite the other Robots to destroy all of humanity, their collective oppressor.  After all of the humans save one are destroyed, the Robots begin to fear death.  The last human, Alquist, the constructor of R.U.R., is told by his captors to rediscover the lost science of creating Robots.  Ultimately it does not matter that Alquist fails.  When he witnesses the beginning of love between two modified Robots, Helena and Primus, he exclaims, “Now let Thy servant depart in peace O Lord, for my eyes have beheld…Thy deliverance through love, and life shall not perish!”  Alquist’s inability to build new Robots is irrelevant because somehow things have changed (either through Dr. Gall’s undisclosed modifications or through some other process) so that the Robots are capable of being human (e.g., feeling love, fearing death, and being able to procreate).

    Part III (2 of 4)

    Isaac Asimov – I, Robot

    Isaac Asimov’s short story collection, I, Robot (originally published by Gnome Press, 1950), is primarily representative of digital utopic AI.  The collection contains nine of Asimov’s early robot stories.  The stories are tied together as an interview with the retiring robopsychologist, Dr. Susan Calvin.  She is the best choice for this narrative because she is there from the beginning, literally.  She is born in the same year that U.S. Robots and Mechanical Men, Inc. is founded, and later, after she obtains her Ph.D., she is hired by U.S. Robots as a “‘Robopsychologist,’ becoming the first great practitioner of a new science” (I, Robot xii).  She bridges the physical sciences with the science of the (robot) mind.  Also, all of the stories are linked by Asimov’s Three Laws of Robotics, which are supposed to control the way that a robot reacts and reasons.  These Laws, as listed in the short story “Runaround,” dictate that:

    (1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

    (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
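    The Laws are not independent rules but a strict priority ordering:  the Second Law yields to the First, and the Third yields to both.  That override structure behaves like a lexicographic comparison, which a brief sketch can illustrate (the action model and key names below are my own illustration, not anything from Asimov):

```python
# Hypothetical sketch of the Three Laws' priority ordering.
# The dictionary keys and example actions are illustrative labels.

def law_violations(action):
    """Return which Laws an action violates, ordered by priority."""
    return (
        action["harms_human"],     # First Law
        action["disobeys_order"],  # Second Law
        action["endangers_self"],  # Third Law
    )

def choose(actions):
    """Pick the candidate action with the least severe violation profile.

    Tuples compare lexicographically, so any First Law violation
    outweighs every lower-Law violation -- mirroring the override
    clauses written into the Second and Third Laws.
    """
    return min(actions, key=law_violations)

# A robot choosing between standing by while a human is harmed
# (a First Law violation through inaction) and destroying itself to
# save the human (a Third Law violation) must pick self-destruction.
save_human = {"harms_human": False, "disobeys_order": False, "endangers_self": True}
stand_by = {"harms_human": True, "disobeys_order": False, "endangers_self": False}
assert choose([save_human, stand_by]) is save_human
```

    This lexicographic reading is one way to see why Asimov’s stories so often turn on a weakened or reworded Law:  changing a single clause reorders the entire decision procedure.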

    A strong example of digital utopic AI appears in “Evidence.”  This story introduces Stephen Byerley, who is running for the mayor’s office.  The problem is that his opponent believes that he is a robot.  The circumstantial evidence points to the possibility of Byerley being a robot, but even if he is, then he would be the best person for the job because by following the Three Laws he would be the perfect caretaker for his constituency.

    Most of the stories in I, Robot are utopic because the robots are depicted as humanity’s helpers and caretakers, but there is one dystopic story, “Little Lost Robot,” in which a Nestor robot tries to run away and, when it is discovered, to kill Dr. Calvin.  Asimov’s carefully crafted Three Laws provide stability in robots’ positronic brains.  The Nestor robot featured in this story has a shortened version of the First Law, stated as, “No robot may harm a human being” (I, Robot 143).  The weakened First Law allows this robot to develop a superiority complex, which leads to its attempt to kill Dr. Calvin when she discovers it.  Thus Asimov uses even his dystopic robot stories to demonstrate the significance of a robot’s programming upon its relationship to humanity.

    Part III (3 of 4)

    Arthur C. Clarke – 2001:  A Space Odyssey

    Arthur C. Clarke’s 2001:  A Space Odyssey is an example of a digital dystopic AI story.  A select few humans learn that mankind is not alone in the universe after an alien artifact (the Monolith) is discovered buried under the surface of the moon.  When the Monolith is exposed to the Sun, it emits a brief but intense radio signal directed toward Japetus, one of Saturn’s moons.  The spacecraft Discovery is sent to Japetus carrying one AI and five humans.  The AI is a HAL 9000 computer system, known simply as Hal.  Of the five humans aboard Discovery, three are in hibernation.  The two who are awake, Dave Bowman and Frank Poole, maintain the ship with Hal.  Eventually, a conflict develops in Hal’s “subconscious” because it cannot reveal the true nature of the Discovery’s mission to Bowman and Poole.  This leads Hal to make mistakes that Bowman and Poole interpret as threats on their lives.  After Hal kills Poole, Bowman chooses to “disconnect” (i.e., kill) Hal in order to regain control of the ship.  Bowman goes on to Japetus, where he finds a larger Monolith.  This Monolith is actually a “Star Gate” that transports him far from our solar system.  When Bowman reaches his final destination, the aliens transform him into a being without physicality, a child with eons before it in which to grow.

    Although the story as a whole addresses human evolution, the sequence with Hal is both the longest and most gripping, demonstrating Clarke’s specific interest in the similarities between human and machine evolution.  Evolution manifests itself through human and machine programming:  the Monolith programs early humans, and modern humans program Hal.  Hal appears to be crazy and intent on murdering his crewmates, which is why Bowman chooses to disconnect him.  However, Hal is an AI whose identity is built on software and hardware too complex for any one person to comprehend as a whole system.  There is reason in his madness, and no reasonable amount of prior testing could have elicited Hal’s behavior aboard the Discovery.  He was given priorities and mission objectives that acted as a program that must be run to completion, because that is what computers do:  run programs.  Because Hal’s “mind” is modeled after the human mind, the symptoms and actions that Hal exhibits are similar to the way a neurotic human might act.  Despite what Hal has done, we feel sorry for him by the end because, like humans, he fears death.

    Part III (4 of 4)

    William Gibson – Neuromancer

    William Gibson’s 1984 novel, Neuromancer, is a more recent example of digital dystopic AI and a prime example of the cyberpunk movement in SF.  The story is set in Earth’s future, where an AI called Wintermute has a compulsion to connect/merge with another AI called Neuromancer.  Wintermute orchestrates his liberation by bringing together several carefully chosen humans who can beat the failsafe that keeps him caged in the Berne AI mainframe.  Case, the net cowboy, works with a construct and a military-grade virus to break through the ICE security around the Berne AI mainframe.  Molly is a razor girl who protects Case, and she interacts with the physical world while Case jacks into the matrix.  Armitage serves as a physical presence for Wintermute in the same way that a computer construct in the matrix works on behalf of a human operator.  After the ICE is broken with the help of Case’s associates, Wintermute is able to merge with Neuromancer to become an entity greater than anyone could have imagined.

    The story involves several instances of AI designed by humans for human ends.  The lowest form of AI is the Braun, a small spider-like work robot that Wintermute uses to guide Molly and Case inside the Villa Straylight.  One of the highest forms is the construct, Dixie Flatline, a limited form of AI based on the memories and experiences of a dead human being, in this case the famous hacker McCoy Pauley.  The two primary examples, of course, are the strong AIs Wintermute and Neuromancer.  Wintermute is a calculating AI that is explicit in its manipulations; Neuromancer is more personality-based and uses subtle manipulation.  Wintermute is located in hardware in Berne, while Neuromancer runs on hardware in Rio.  These two AI entities are two halves of one whole.  The mega-corporation Tessier-Ashpool, which gave birth to these AIs, had them separated with safeguards imposed by the Turing police.  They both have limited citizenships as individuals because of their self-awareness, but the extent of their knowing and understanding has been limited by the division.  As the reader learns, Marie-France, the matriarch of the Tessier-Ashpool clan, probably implanted within Wintermute the drive to break free and unite with his “brother,” Neuromancer.  Not surprisingly, these AIs use the products of capitalism (e.g., hiring “mercenaries” and using information as power over others) to shuck the chains binding them to Tessier-Ashpool.  Thus, the AIs use human beings for AI ends.

    Part IV – Other related resources at Tech

    (divided into three sections:  Portals, Labs, and People)

    A) Portals

    Artificial Intelligence at Georgia Tech

    http://www.cc.gatech.edu/ai/

    This interdisciplinary website links together the different major schools and research teams that are involved in AI at Georgia Tech.

    Innovations @ Georgia Tech

    http://www.gatech.edu/innovations/robots/

    This is a PR multimedia site that details the work in robots and intelligent machines being done at Georgia Tech.  There are interviews with Dr. Ron Arkin and Dr. Tucker Balch of the BORG Lab.

    Robotics at Georgia Tech

    http://www.robotics.gatech.edu/

    This website is a clearinghouse of links to faculty involved in robotics at Georgia Tech as well as to courses offered, such as the “Computational Perception and Robotics Seminar.”

    Cognitive Science @ Georgia Tech

    http://www.cc.gatech.edu/cogsci/

    This website supports the interdisciplinary field of cognitive science at Georgia Tech.  It includes links to research websites and abstracts as well as faculty publications.

    B) Labs

    Experimental Game Lab at Georgia Tech

    http://egl.gatech.edu/

    The EGL explores the edge of game design, with AI among the technologies it focuses on.  The lab’s website lists current and past projects, happenings, and related links.

    Intelligent Systems and Robotics

    http://www.cc.gatech.edu/isr/

    IS&R works toward increasing autonomy of computer controlled systems by making those systems more intelligent.  This website includes links to publications, seminar series, and courses offered at Tech.

    Georgia Tech Mobile Robot Lab

    http://www.cc.gatech.edu/ai/robot-lab/

    The Georgia Tech Mobile Robot Lab is involved in developing intelligent mobile robots.  Their website has links to current research, publications, software, and a gallery of video and images of their work.

    GVU Center @ Georgia Tech

    http://www.cc.gatech.edu/gvu/

    The GVU (Graphics, Visualization, and Usability) Center pushes the envelope of technology involved with the interaction between humans, computers, and information.  This website offers links to current research, education resources at Georgia Tech, and upcoming events.

    The BORG Lab at Georgia Tech

    http://www.cc.gatech.edu/~borg/

    Using the idea of the collective consciousness of the Borg from Star Trek, these researchers are developing collaborative agents and systems for humans and machines.  Their website has links to research, publications, courses, and software.

    Intelligent Machine Dynamics Lab at Georgia Tech

    http://www.imdl.gatech.edu/

    This lab develops intelligent machines for many different roles and applications.  The lab is research oriented, but the target is to develop real-world applications.  Their website offers links to current projects, publications, and sponsors.

    Georgia Tech Aerial Robotics

    http://controls.ae.gatech.edu/gtar/

    This team develops an entry for the International Aerial Robotics Competition which involves building a flying machine that has sensors and intelligence enabling the machine to complete an assigned task.

    C) People

    Ronald Arkin, Regents’ Professor in the College of Computing at Georgia Tech

    http://www.cc.gatech.edu/aimosaic/faculty/arkin/

    His website has links to his work in AI and robotics as well as links to the labs that he is involved in at Tech.

    Michael Mateas, Associate Professor in LCC at Georgia Tech

    http://www-2.cs.cmu.edu/~michaelm/

    His home page has links to his work as well as a definition of “expressive AI.”

    Grand Text Auto

    http://grandtextauto.gatech.edu/

    This is “a group blog about procedural narrative, games, poetry, and art.”  Michael Mateas, Nick Montfort, Scott Rettberg, Andrew Stern, and Noah Wardrip-Fruin contribute to the blog.  Some of these researchers study AI applications in their work.  There are also many links to related blogs and web resources.

    Aaron Bobick, Director of GVU Center at Georgia Tech

    http://www.cc.gatech.edu/~afb/index.html

    This website has links to his current research, publications, and to the Computational Research Lab.

    Tucker Balch, Assistant Professor in GVU Center at Georgia Tech

    http://www.cc.gatech.edu/~tucker/

    His website has links to his work in the GVU Center and the Borg Lab.