
Sitting at the computer

Applying a metaphor of ‘ecology’, with its connotations of the natural environment, to thinking about the Internet provides a framework for conceptualising the Internet as a diverse, interconnected system. It also points to the necessity of considering the Internet, and indeed technologies in general, as existing not in a vacuum, but as part of the social, political and economic forces in which the technology is embedded. Finally, it prompts me to think of the Internet as evolutionary rather than static, and to consider the nature of the transactions that take place over the network.

Additionally, thinking of information ecologies reinforces the idea of the power of knowledge, or more specifically the ascendancy that control of information provides in an environment made up of information. This can be illustrated by the gap between information-rich and information-poor countries, the implications that this gap has for social and economic development, and the effects that this imbalance has on the relationship between those with information power and those without.

Definitely looking at the wider picture when considering this concept…

Further use of metaphor and the Internet…

rhizome

I have come across the metaphor of the ‘rhizome’ and how it applies to the Internet and the presentation of knowledge. A rhizome in nature is a plant structure that is anti-genealogical, without a centre, and able to connect with any other part of its structure. If any part of the rhizome is broken up, it will grow again. In the paper “Hypertextuality” by Sergio Cicconi, the author provides a view of the application of the metaphor of the rhizome (citing Deleuze and Guattari) to the characteristics of hypertext.

Cicconi relates the history of attitudes towards the organisation of Western knowledge. The encyclopedic categorising of knowledge has favoured the creation of hierarchies as an organisational system, likened to the structure of a tree (the Tree of Knowledge). This is contrasted with the nature of hypertext, which allows the creation of a decentralised, infinitely expandable network of information permitting a “multiplicity of paths and the constant remodeling of its own structures and contents” (Cicconi, 1999). In this way, the more relevant metaphor to apply to the hypertextual representation of knowledge is that of the rhizome.

Understanding through metaphor…

The Internet is a plant! (Ok, I need to go lie down now, my head hurts… signing out)

For evaluation purposes I will be using an article that I used as part of my concepts assignment. The details are:

URL: http://www.forbes.com/free_forbes/2007/0507/176.html

Author: Sherry Turkle

Title: “Can You Hear Me Now?”

Appeared on Forbes.com on 05/07/07

(Note: I used aspects of this article to address the concept “Human Computer Interfaces”)

Annotation for this Site/Article

This article appeared on Forbes.com, a website whose content is mainly aimed at providing information related to finance and business issues. From the website itself it is not clear whether this article was exclusive to the site or is a transcription of an article in the parent company’s magazine publication Forbes, although the article is found in the ‘Special Report’ section of the website. Brief information on the author is provided at the end of the article, saying that “Sherry Turkle is professor of the social studies of science and technology at MIT”. A web search on Sherry Turkle shows, from an MIT profile page, that the author is also “the current director of the MIT Initiative on Technology and Self, a center of research and reflection on the evolving connections between people and artifacts.”

The main idea of this article is the questioning of aspects of identity in an increasingly networked world, where people are never out of reach of contact through the use of technology. Turkle considers the placement of the self when the individual subject is “wired into society through technology” and how statements such as “I’m on the Web” or “I am on my cell” speak of a new state of the self. Other social aspects are considered, such as privacy concerns and the blending of the real and virtual worlds.

The article has the tone of a ‘think-piece’, in that the language is not overly academic nor the content dense with theory, even though the author is a university professor. Anecdotal evidence for the author’s assertions is conveyed in the text, although no references are provided on the website, reinforcing the tone of a personal (though very well informed) point of view. The business and financial context of the website seems a little at odds with the subject matter addressed in the article. The article is headed as a ‘Special Report’, which gives it a context that falls outside the usual financial reportage of the majority of the website.

Future Use

Comparing apples with oranges here, as I have used a different source for this task rather than leading on from the source in the previous task! For future use the detailed annotation would be more useful to refer back to, as the source has been evaluated and a brief content description is provided. The only issue here would be how this annotation is stored or organised in such a way as to provide easy retrieval (maybe a Diigo annotation would do).

For external users, just looking at the URL of this information would not provide enough information, or could be potentially misleading in the user evaluation, because the context is a financial web site. Further supporting information, particularly author, would be more useful here to an external user.

So, I am looking at this task from the perspective of having already completed the Concepts assignment.

A little about my organisational methods during the assignment –

At some stages, while furiously trying to complete the assignment, I had something like three instances of Firefox running with at least 25 open tabs in each window. Chaotic, to say the least. How to organise such information as URL, author, institution and a summary?

The first step for me is to organise relevant searches by using the browser’s ‘bookmark’ function. Searches are then grouped into folders and tagged with keywords for later retrieval. Some sites, however, can be ambiguous when I return to view the bookmarks, especially if the website is described by a hard-to-decipher URL, or if my tagging is not up to scratch. I’m also afraid to admit that I collect the majority of the annotated information I require for bibliography purposes after the fact of retrieving and using the information. I think we’ve all been there?
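A minimal sketch in Python of the kind of record-keeping this calls for (the field names and helper functions are my own invention, not the format of any particular bookmarking tool):

```python
# Hypothetical bookmark store: each entry keeps the details needed
# later for a bibliography, and tags allow retrieval by keyword.
bookmarks = []

def add_bookmark(url, author, institution, summary, tags):
    """Record a source with its annotation details captured up front."""
    bookmarks.append({
        "url": url,
        "author": author,
        "institution": institution,
        "summary": summary,
        "tags": set(tags),
    })

def find_by_tag(tag):
    """Return every bookmark carrying the given keyword tag."""
    return [b for b in bookmarks if tag in b["tags"]]

add_bookmark(
    "http://www.forbes.com/free_forbes/2007/0507/176.html",
    "Sherry Turkle", "MIT",
    "Identity and the self in a networked world.",
    ["hci", "identity"],
)

print([b["author"] for b in find_by_tag("hci")])  # ['Sherry Turkle']
```

Even a scratch structure like this would have saved me chasing annotation details after the fact.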

Continuing on from my keen ability to only come across something worthy ‘after the fact’, I have been experimenting with Diigo, a Web highlighter, sticky-notes and annotation tool. Here is a Diigo-annotated extract of information produced from a particular website, for a source that I used in the concepts assignment – “Human Computer Interfaces”.

[Image: Diigo annotation extract]

This could have been useful, especially paired with the Diigo bookmarking feature, as a means of locating information. I especially like the ability to add in and share your own comments via the sticky-note function.

Next time I just know I will be more organised!

Ok, for this task I am going to use the term “human computer interfaces” as a way to get some more interesting results (forgive my previous diversion) and to tie in with my Concepts assignment.

Entering just the keywords as they are into Google – “human computer interfaces” – results in 26,900,000 hits.

Using the AND operator, as in “human AND computer AND interfaces”, returns around half the results at 13,600,000. I suppose this is fewer because the operator asks for all the terms to be taken into account when retrieving records.

Conversely, using the OR operator, as in “human OR computer OR interfaces” broadens the search to return a huge 1,650,000,000 Google hits! Winner!

To get the results that are most relevant here I would use “human AND computer AND interfaces”.
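The narrowing and broadening effect of the two operators can be sketched with set operations: AND corresponds to intersecting each term’s result set, OR to taking their union. A toy illustration in Python, with made-up page sets rather than Google’s actual index:

```python
# Toy index: each keyword maps to the set of page IDs containing it.
index = {
    "human":      {1, 2, 3, 5, 8},
    "computer":   {2, 3, 5, 9},
    "interfaces": {3, 5, 7},
}

# AND: only pages containing every term (set intersection).
and_results = index["human"] & index["computer"] & index["interfaces"]

# OR: pages containing any of the terms (set union).
or_results = index["human"] | index["computer"] | index["interfaces"]

print(len(and_results), len(or_results))  # the AND count never exceeds the OR count
```

Which is exactly the pattern in the hit counts above: the intersection is always a subset of the union.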

Other options to narrow the search would be to enclose the keywords in quotation marks to search them as an exact phrase, or to use a “+” sign in front of a keyword to tell the search engine to match the word exactly as typed.

As for limiting a search to university sources only, I am a little unsure of how to achieve this. I have read on the WebCT forums that including “edu” in the keywords can act as a limiter, as .edu is the domain for educational institutions.
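Google’s site: operator would make this kind of limiting explicit, restricting results to a given domain, e.g.:

```
human AND computer AND interfaces site:edu
"human computer interfaces" site:edu
```

That avoids the problem of “edu” merely appearing as a keyword anywhere in a page.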

My tactic for the Concepts assignment was to extensively use Google Scholar as a starting point. If results point to an academic online journal that is restricted by a paid subscription, accessing it through a Gecko database search means that you are going through a Curtin University gateway, and as such no fees apply. Very handy.

The search term is…

“retro gaming”

Entered into Google the first result is:

“Retrogaming – Wikipedia, the free encyclopedia”

The recorded number of hits is 14,500,000.

I don’t think I trust Copernic Basic but here goes…

The first hit is the same as the Google search result. The recorded number of hits is 52. And that’s across 13 search engines? I can’t seem to find any more stats on the number of hits in Copernic other than the 52. Anyone know anything about this? (I shake my fist at the optional task!)

In the interest of science…

Using the metasearch engine SurfWax, a search for the term “retro gaming” returns as its first result:

“Call hitmaker cheesy, but he’s ‘very rich’.”

Further investigation of the link reveals it is a profile of the record producer David Foster from CNN.com’s entertainment page. Searching the page finds the word “retro” as part of a quote attributed to Foster:

“I just did the new Seal album, and it’s an album of ’60s retro [soul] music. … It’s amazing.”

Nowhere on the page is the term “gaming” to be found.

The number of hits recorded by SurfWax is a massive 137,000,000.

Results

Top five results with Google Search:

Retrogaming – Wikipedia, the free encyclopedia

ClassicGaming – the home of classic gaming on the net

Retro Gaming – racketboy.com

Retro Games – Online Game Site Supporting all retro, classic and …

DesktopGaming Has Killer Retro Gaming Wallpapers – Lifehacker

And SurfWax:

Call hitmaker cheesy, but he’s ‘very rich’

Retro Remakes : Classic Gaming For The Next Generation

ClassicGaming – the home of classic gaming on the net

High-tech gifts in low-tech packages

Retro Games – Games at Miniclip.com – Play Free Games

Differences

The search term is rather broad, so I think I am going to run into trouble in the later parts of this task, especially with academic sources (I might be surprised). In terms of the number of hits returned, the metasearch returns a greater amount by a long way. In terms of content both searches are very similar, with the exception of the SurfWax result regarding the producer of Seal’s “retro” album (weird). Also, I wonder why SurfWax doesn’t return the Wikipedia result in its first five results, yet returns two pages sourced from CNN’s entertainment and technology pages?

At first glance, the Google search provides the more promising results, but I’ve got a sneaking suspicion my search term is limiting me here…

Of the list of programs provided for this task I have used or had experience with the majority of them. A search manager/combiner is a new concept to me, so I have downloaded Copernic Agent from this page. The version I downloaded is the ‘Basic’ version, which is free and does not come with features that are offered in the ‘Personal’ or ‘Professional’ versions, both of which involve a cost.

Some features that the Basic version does not have that are present in the other versions are accessing hidden information, summary and analysis of results and tracking changes in web page contents.

The interface is fairly easy to navigate; it definitely looks and behaves like a program (like Outlook) rather than your typical web browser. What I enjoy most here is the display of previous search results in folders at the top of the screen for easy access, and the ability to organise your searches into folders. I am curious about the ‘Analyse Results’ function that, sadly, is only available in the commercial versions. It appears that you can access further information, such as extracting key concepts and sentences to form summaries of pages. This could have been handy for my Concepts assignment.

I will cover search results in my next post; however, the Copernic help says that the basic version gives access to 7 categories, which are the Web, newsgroups, Buy computer hardware, Buy software, Buy electronics… OK, I get why this task was an optional one now!

The commercial version looks powerful, with access to 125 categories and 1200 search engines. For now, though, I wonder what the value is in using this basic version of Copernic over a metasearch engine like SurfWax or Dogpile.

Download – Media Player

A program that I have downloaded for playing music and have thoroughly enjoyed is MediaMonkey. I like the fact that it is just about playing and organising music files, as opposed to some media players that try to do everything and don’t do it very well. Also, the auto-organise feature comes in very handy, especially if you have files spread across your hard drive which are tagged inconsistently.

Download… nothing?

Will Web applications be a replacement for desktop applications? Instead of downloading, will the Web serve as the host of applications available from ANY desktop? I have come across a site called Glide, which is billed as ‘The Complete Mobile Desktop Solution’ and offers 10 gigabytes of free storage. As yet I have not had a chance to experiment with this site, but the concept is an intriguing one.

As communications are increasingly conducted over the Web I think the lines are blurring between the desktop and Web applications. However, if everything was to be held and accessed through the Web I think existing concerns regarding privacy and the storage of personal information would be raised even higher. Something to keep an eye on for future developments.

Blogs

Kudos, NET11. If I hadn’t taken this course I might never have taken the plunge into blogging at all.

Some blogs that I currently like:

Music: http://obscuresound.com/

A music blog featuring MP3 downloads, interviews and album reviews. As the name suggests, ‘different’ sounds are explored, so this blog is a way of finding out about music that might otherwise go unnoticed.

Art: http://www.artnewsblog.com/

Daily news stories from the art world

Current affairs discussion: http://larvatusprodeo.net/

Australian group blog discussing politics, sociology, culture…

Technical Things: http://www.techcrunch.com/

Profiles of new Internet products and companies

Diversity and opinion to me make up the fundamental aspects of blogging. There is the potential for debate and sharing of information. Reading a good blog inspires you to give it a try for yourself – and it’s really not that hard.

Personally, I enjoy the act of putting down words in a blog as a method of formulating and retaining lines of thought and experience. I have checked out Tumblr recently, which looks like a good method of quickly posting text, photos, links and quotes, amongst other things. My intention is to use Tumblr as a kind of ‘scribble’ blog, perhaps not as fully formed or strictly structured as this blog, but more to capture random thoughts, insights and useful links. I think this is going to come in handy when I start my next units of study, as a way of keeping things close and bringing concepts together. So, thanks NET11 for getting me blogging!

Web 2.0

I thought Web 2.0 was all about gradients and reflective buttons and silly/catchy website names, e.g. Weebly, Thoof, Yoono, Diigo, ooVoo; the list goes on here… and here.

But beyond the popular aesthetic choices and the employment of marketing agencies thinking up buzzwords, the characteristics of Web 2.0 take in the web as a platform, proliferation of interactivity and connectivity, shared content, online social networking, blogs, and wikis.

I think that the need to define or categorise developments points to the evolutionary nature of the Web. The Web is not static, and its evolution is tied not only to technological advances, but to social changes in the way the technology is used.

A thought-provoking counter to the hype surrounding Web 2.0 is provided in the articles published in the First Monday ‘Special Issue: Critical Perspectives on Web 2.0’, which can be found here:

http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/issue/view/263

These articles present a necessary criticism of the rhetoric of Web 2.0 and address issues of privacy (especially as so much data related to individuals is spread across social networks) and the corporatisation of online social and collaborative space. Of particular concern here is the concept of ‘participatory surveillance’ that online social networking can enable.

The opportunity for greater collaboration brings about greater opportunity for commercialisation?

Ok, Here it is, a link to my first Web page…

http://members.optusnet.com.au/akw81/

I have uploaded my first web page to the web space provided by my ISP, Optus. I did the upload via the FTP client FileZilla.

Also, after the initial upload I submitted the address of my page to http://browsershots.org/ which is an online service that checks the appearance of a web page across different browsers. Seemed to be all ok!

This task involved the creation of a basic Web page by following the instructions provided by Joe Barta, titled ‘So, you want to make a Web Page!’, which can be found here.

For a text editor I downloaded the free version of NoteTab Light, which can be found here.

Firstly, I enjoyed this task, and the tutorial was easy to follow for a novice. I had an understanding of what HTML was but had not learnt the conventions, and had certainly not put it into practice. For my first Web page, I followed the instructions of the tutorial with regard to what tags to use and then adapted the content – e.g. the body text, the pictures and the links.

It was valuable during the creation of the web page to open the file in a browser window as a means to check progress, to spot any errors, or to make corrections relating to design decisions. Also, I was taken with how easy it is to place links to other sites and documents. It was satisfying to see the end result of this exercise. Even though the page is not about anything particularly interesting it felt good to be able to look ‘under the hood’ so to speak, at the stuff that makes up the WWW and to have an insight in how to go about creating pages.
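For the record, the skeleton of a page like the one this exercise produces looks something like this (my own minimal reconstruction, not Barta’s exact example):

```html
<html>
<head>
  <title>My First Web Page</title>
</head>
<body>
  <h1>Hello, WWW!</h1>
  <p>Some body text, with a
     <a href="http://browsershots.org/">link to another site</a>
     and a picture:</p>
  <img src="picture.jpg" alt="A picture">
</body>
</html>
```

Saving this in the text editor and refreshing it in the browser was the whole check-your-progress loop described above.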

So which do I prefer? HTML or blogging?

Well, I like the control and satisfaction that comes from coding HTML. The problem is that there is a lot to learn with HTML. In this tutorial I have only touched the surface and am keen to learn more, but the standards, conventions and language can seem prohibitive and time-consuming for a beginner. By contrast, when I started this blog it was fairly intuitive and easy to post all sorts of content right from the start. Perhaps the line between blog and website is blurring anyway? I tend to associate blogging with something quite personal, in an ‘online diary’ sort of way, although I am having a rethink on this position. To me, HTML provides the enabling structure and is more related to choices of presentation and design.

I can’t choose. Can’t I just say I like them both? :-)

Some Links…

HTML.net

HTML tutorial of all flavours with additional CSS. Yum!

Have Blogs Killed Conventional Websites?

Here the author makes the case for the advantages of blogs over websites, including financial reasons, ease of use, media visibility, etc.

Hypertext – Martin Ryder (links page)

Thought I would throw in this page, which provides a resource of links relating to hypertext, including history and hypertext theory.