Technology analysis of the latest gadgets, consoles, and computer architectures.

Saturday, October 04, 2008

The New Age of Desktop Computing

From Google apps to Sparkpeople to Facebook apps to Hulu, web applications are taking on a more pervasive role in modern desktop computing. We can now bank online (uwcu.org), manage a budget and organize transactions (mint.com), and play a game of Go with friends (Facebook.com) without relying on anything but an internet browser running on any computer anywhere with an internet connection. It has become so commonplace that many people may not realize that what used to require an investment of some sort (be it time, money, computer resources, etc.) now takes mere seconds to access and use. At what cost does this increased reliance on web applications come? How will it affect the way we use computers in the future? Although I could probably write a book on these topics alone, my goal with this blog posting is to set out certain guidelines for web application developers (and users) to keep in mind to ensure that users' needs are met while capitalizing on the future of desktop computing.

From the 1980s, when desktop computing exploded, until just a few years ago, the traditional desktop computing experience consisted of a Windows or Mac OS-based machine bundled with applications serving a variety of tasks: productivity (Microsoft Word and Excel), entertainment (Winamp, Windows Media Player, DVD Player), creativity (iPhoto, Fruity Loops), and games (Unreal Tournament, Quake 3, Warcraft 3). Many of these applications came bundled with the OS or were purchased at the store. Once the Internet became more accessible to computer users, programs were commonly downloaded and installed; I probably have at least 4 CDs' worth of downloads, and many more burned CDs of old Linux distributions and floppies of even older backups of downloads from before I had a CD burner. It sure was convenient to have enough space to install all of these applications on my computer; I shudder thinking back to when I had to delete an application to make room for another on the 40 MB hard drive of our Apple Mac LC II.



But then, in the mid-1990s, the Internet became more affordable and widespread, and online content distribution reduced the acquisition time of software, music, and on-demand video from days or weeks to hours, minutes, seconds, and even milliseconds. AOL provided a new avenue for interactivity and communication, offering channels with initially text-based and later audio- and video-based content, and giving millions of people access to e-mail, chat rooms, and instant messaging. E-mail started moving away from the desktop: instead of accessing e-mail through desktop applications such as Microsoft Outlook, AOL users got e-mail within AOL's all-encompassing Internet application. Many other companies provided e-mail services during the same period, but AOL was the one that began the trend of de-emphasizing the desktop application. Yes, for much of its life AOL was itself a desktop application, but with AOL it no longer mattered where you, as a user, were when you connected. You could go to a friend's house and, assuming he or she had a computer, a modem, and a phone line, connect to AOL, check your e-mail, browse your favorite "channels", and chat with your friends.

But alas, things changed once more. People realized that there was a world outside of AOL, and that avenue was provided by Internet Explorer and Netscape. Schools started outfitting labs with Internet-enabled computers, and instead of AOL people now had Yahoo Mail and Hotmail accounts. My dad listened to the Voice of America over the Internet using Real Player; teachers would listen to music online during study hall. E-mail and directory services became the first true web applications, followed by portal pages (such as my.yahoo.com) and search engines. Soon, news and media outlets were springing up online; Launch Media went from mailing CDs of music videos and interviews wrapped in a Flash interface to serving on-demand music videos at launch.com. At this point we started seeing more "enablers": desktop applications whose sole purpose was decoding streams of audio, video, and text from services on the Internet. Real Player was extremely popular for streaming audio and video, Winamp for Shoutcast [online radio station] streams, and PointCast for news. Macromedia Flash was mostly used for animation in desktop applications, just as Adobe Acrobat Reader was used for universally viewing documents.

Then came the dotcom boom, or, in more desktop-centric terms, the migration of transaction-based services (and applications) onto the Internet. Combined with the power of the search engine, consumers could now go online to learn about new products and see more competitive pricing than ever before. Most importantly, moving transactions online meant that consumers could buy anything from anywhere instantly (and then wait 5-10 days for the item to arrive at their door).

Finally, desktop applications started migrating to the cloud, one by one. AOL moved e-mail and eventually its content to aol.com. Microsoft moved Money to money.msn.com (see My Money). Intuit's TurboTax moved to turbotax.intuit.com. More and more people now rely on Google, Yahoo, and Microsoft to store their e-mail and handle virus and spam detection. Even virus scanning has moved online. Although the trend was originally driven by companies touting the "Web Operating System" or "Web Desktop" (and you've probably noticed by now that I regularly use an online encyclopedia and dictionary), we are now seeing popular productivity apps brought to the net by the likes of Google and Microsoft, music players by the likes of Last.fm and Pandora, and video players by the likes of Youtube and Hulu. There are even online photo editing and video editing apps, online backup solutions, and [ajax]Windows.

And last but not least, chat and chat rooms have evolved into blogs, forums, and text article comments, leading to the social networking revolution. Despite the fact that ajaxWindows looks to be one of the closest, most comprehensive replicas of the desktop computing experience, I would argue that Facebook, Google, and Yahoo are poised to become the face of the new age of desktop computing.

I'm going to start with Facebook. Facebook caught me by surprise when they announced the Facebook Application Platform. I was absolutely repulsed by Facebook when I first heard people talking about it. It seemed to me like a dating service mashed together with college kids who needed another avenue to waste time talking about people and events (and how drunk they got last night). Eventually (almost 3 years after its advent), I gave in, not because I was bored and needed something to do (trust me, the UW School of Engineering kept me plenty busy), but because I had seen the nice, clean interface (a thousand times better than MySpace), I was concerned about privacy, and I wanted a better way to stay in touch with all of my friends. Most of all, Facebook gave me a visual representation of my social graph and a means of establishing contact with friends (and friends of friends). So I created my profile, and due to lack of time my activity remained relatively low until the applications started coming in. Photos, Events, the Wall, and the Inbox became incredibly useful for sharing and viewing photos from events, inviting people to events or keeping a live birthday calendar, and exchanging messages both publicly and privately. Then the Application Platform was created, applications started rushing in from both Facebook and third parties, and I found myself using Facebook as a tool to privately share photos, notes, and videos from my trip to the Philippines with my friends. Just recently, Facebook has put on a new look, and the platform is starting to resemble a desktop computing environment more and more. Just take a look at Facebook's new interface:



Notice anything familiar? I see a start menu, quick launch, and task bar at the bottom and a menu bar at the top with a search box at the upper right. Scary, huh? Well, maybe it's not exactly frightening the way that organized religion may destroy the world, but it is pretty amazing how close the mainstream has come to a web operating system. With the appropriate host operating system (including the browser), Facebook could easily become the new Windows. No wonder Microsoft made such a significant investment in Facebook...

Yahoo is next in line. This may surprise some people, considering the bad press Yahoo has gotten over the past year or two. But keep in mind that it wasn't too long ago that Yahoo was king of the web. Even today, Yahoo has content and services that rival Google, Microsoft, and AOL. As the original directory service and the provider of the first mainstream web desktop (my.yahoo.com), Yahoo is well poised to be king of content, assuming management succeeds at merging all of its acquired and organic business units around a solid ad platform. And now it has made the next move, arguably one of the most important since Facebook created the Application Platform: Yahoo has opened its platform to developers. Essentially, Yahoo has done to the net what it has been trying to do to its business units: create one common platform and one common interface for users. With this move, Yahoo will go from being a suppressed voice in social networking to a much more prominent one. What better way to compete with Google and Facebook than to tie its "10 billion" Mail and Messenger users more tightly to other Yahoo platform applications, websites, advertisers, and future content producers? In fact, this move makes Google look like it's falling behind.

But Google does have one thing over both Yahoo and Facebook: an extremely effective ad platform, and web applications that provide a great web computing experience.

Google has been incredibly active, both organically and through acquisitions, in replacing desktop applications and suites with web-based equivalents. In fact, Google now sells businesses a $50/user/year online productivity suite that provides e-mail, calendaring, chat, word processing, spreadsheet, and video applications with 10GB of online storage (and 25GB for e-mail). And Google has brought the power of Keyhole's desktop application for Earth satellite imagery to its maps, becoming one of the top destinations for directions services.

Google understands better than Facebook and Yahoo (and almost the entire web community) that a new age of desktop computing is brewing, and Google wants to play an integral part in it. Google understands why people have hesitated to adopt these technologies and is taking steps to ensure wider adoption. It has put commonly used desktop tools at users' fingertips. It has taken advantage of the most powerful current web technologies (Flash, AJAX, Javascript) to make its applications fast and reliable. It is trying to be everywhere, going as far as creating a universal open source phone OS and pushing for its widespread adoption to extend the reach of the web (and, in turn, Google) to more and more places. And innovation is key: Google adds APIs to as many of its applications and services as possible to encourage development and usage across many applications and devices.

Finally, Google has an overall system design perspective that is comprehensive and relatively unique in the web landscape. Google understands that the final step toward widespread adoption is demonstrating that the personal computing experience everyone is used to on a desktop or laptop can be effectively replaced with online applications and services, with the added convenience of accessing and producing anywhere, at any time, on any device. As much of a dream as this may be for some people and businesses (i.e. the Cloud Computing Initiative), it is somewhat comforting to know that one day the device in your hand need only be powerful enough to provide the user interface and a quality experience for whatever services (be it voice, video, etc.) one wishes to partake in.

And to provide this experience, in addition to banking on web technologies, Google has moved onto the desktop, providing a toolbar and applications that create a better overall experience. Google Desktop provided a means to search desktop files and e-mails along with web content all in one place. Google Toolbar provided a one-stop shop for search and Google apps / services. And finally, Google Chrome has provided a fast, reliable experience with Google apps.

Google Chrome should not be taken lightly. Although the browser is in large part an outgrowth of the development of Android [Mobile OS], in a day and age when the browser is increasingly the avenue to the new age of desktop computing, Chrome positions Google to become an extremely powerful adversary to Microsoft. This leads me to the main point I've been trying to make (and the reason I started writing this blog posting).

Moving the desktop computing experience online isn't as simple as providing the same applications in an equivalent form on the web. Now that computing power has come so far that all modern computers can handle most common tasks with no noticeable performance difference from computers 2-4 years old, it is understandable that the desktop computing experience has begun to move more rapidly to distributed, remote systems (such as Google's) and to slower, smaller form-factor and/or low-power devices such as the EeePC, the Nokia N800, and the iPhone. But there is a major difference between moving applications to slower devices and moving them to remote systems: the source of the speed degradation affects the user experience very differently. Moving applications from fast, local compute systems to slow, local compute systems is not the same as moving them from fast, local compute systems to fast, remote compute systems, because response time is key. With slow, local compute systems, you lose compute power, but you don't necessarily lose response time, so the user experience degrades only slightly and can be managed. With fast, remote compute systems, unless your task requires a significant number of compute cycles (and most modern desktop compute tasks do not), the increased response time will outweigh the gain in performance, and the user experience will remain unbearable for the foreseeable future.

So I would argue that the only true migration path to fast, remote compute systems is to make the user experience as close as possible to the desktop equivalent, and the only way to properly do that is to reduce the number of requests sent to the remote server. This is the main reason Apple had to create an SDK for the iPhone: users did not find the long mobile network access times acceptable for the web-based applications they wanted to use. And this is why Google ultimately needs Chrome: to keep enhancing both web technologies and the browser so that response times stay low for the applications users want to use.
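As a rough back-of-the-envelope illustration of why round trips dominate (the host below is just a placeholder, and the numbers will vary wildly with your connection), compare the wall-clock cost of a trivial remote request against trivial local work:

# A remote request pays a full network round trip (tens to hundreds of
# milliseconds) no matter how fast the server is; comparable local work
# finishes in microseconds to milliseconds.
time curl -s -o /dev/null http://www.example.com/
time grep -c "" /etc/passwd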

Saturday, September 20, 2008

Celio Redfly and its Reminiscence Effect

The Celio Redfly is a netbook-shaped Windows Mobile extension device that provides a Windows Mobile smartphone user with an 8" screen, full keyboard, and touchpad in a lightweight, portable form factor.



This device lacks mass storage and notable processing capabilities; instead, it relies on a Bluetooth or USB connection to a Windows Mobile smartphone to provide the processing power and storage. The device essentially gives a Windows Mobile user a larger screen and keyboard, similar to how a docking station gives a laptop user access to desktop accessories.

The real qualm I have with this device, however, is that it sounds remarkably similar to a design I created close to 5 years ago. I will have to consult certain parties to find out if the drawings are still available somewhere, but the basic gist is the following:

Imagine a tablet PC without a mass storage device or CPU; instead, it has enough space and a dock connector to house and interface with a PDA. I envisioned that one day one would prefer to carry one's personal computer in a pocket or purse, providing immediate access to personal files, e-mail, etc. As PDA users can attest, although it is convenient to carry a personal device in one's pocket, it is very cumbersome to use such a device for reading literature, browsing the web, or writing a story. As a result, one may choose to carry a laptop around, which adds significant weight to one's baggage. Instead, it would be great to carry just the PDA when weight and space are an issue, and use the PDA extension when a larger screen, keyboard, and additional ports are needed.

Hopefully I will soon be able to post either the original drawing or a new mock-up of it. If anyone finds prior art or a patent on this, let me know :D

Sunday, June 01, 2008

Moving and Expanding an EXT2/3 Partition

Synopsis: Need to move and resize an EXT3 partition.

System Configuration: Gentoo Linux x86_64 2.6.20

RAID-1 mirror with 60GB of unallocated space at the start of the drive and a 100GB EXT3 partition.

Brief Backstory: After setting up my media PC with Windows Vista but before my extended business trip to the Philippines, I decided to convert my desktop to a Linux server. When I first purchased this machine over 3 years ago (see historical blog postings for details), I had an extra 40GB hard drive installed, onto which I installed 64-bit Gentoo Linux. Well, 3 years later, I decided to ditch Windows and use Linux on this machine exclusively. This required a somewhat painful upgrade process that I have yet to finish; I managed to update the kernel and some system utilities before stopping.

Anyhow, one of the changes I made to my system after switching to Linux exclusively was picking up a second 160GB SATA hard drive so that I could have some redundancy for my data. Now that I have returned from the Philippines and have accumulated another 200GB of TV content, I feel the need to A) add a lot more capacity to my server and B) add a backup medium (aka a large external hard drive) that I can rsync to. In the meantime, though, I wanted to test the server to see how well it would hold up while also giving me access to my data while I was gone. The Linux server proved to be a success: it ran for over 215 days without a crash. Eventually I had to reboot it to prepare for the partition move and resize, but the Linux system is the most stable of the 3 OSes I currently use daily (Windows XP/Vista, Mac OS X, and Linux).

So, having proven to myself that Linux would be a fine replacement for my desktop OS and would function very well for my future server (and being only 1GB from filling the 100GB data/music partition), I decided to bite the bullet and try to move and resize my EXT3 data partition. I rarely had problems moving NTFS and FAT partitions in the past, so this should be a snap! I have much of my music backed up at mp3tunes.com and a burned CD of my data folder from the past year or so, which should be good enough. (In retrospect, I should have done one final backup of everything before proceeding, but more on that later.)

Plot:
I referenced this guide to start. I would definitely come back to it if I ever need to expand an EXT partition to the right (or shrink a partition); it was a good reference nonetheless. I used the following commands (from Page 2) to convert my EXT3 partition to EXT2 (i.e. remove the journal) and to check the filesystem with fsck/e2fsck before proceeding.

fsck -n /dev/sda1
tune2fs -O ^has_journal /dev/sda1
e2fsck -n /dev/sda1

I rebooted my computer and loaded Paragon Partition Manager v5, since it was able to recognize my RAID and the EXT3 (now EXT2) partition. Paragon reported no warnings or errors while moving the partition to the beginning of the drive and resizing it. However, after booting into Linux again and trying to perform a filesystem check, e2fsck repeatedly reported the following warning messages (acting on different groups and blocks):

Pass 1: Checking inodes, blocks, and sizes
Group 0's inode table at 13 conflicts with some other fs block.
Relocate? no


Illegal block #13310 (4294967295) in inode 7. IGNORED.

I used the following command to tell e2fsck to relocate these filesystem structures while providing more console output and status updates. Without -C 0 and -v, e2fsck would run for days without ending; only when I added -C 0 (which displays a progress bar) did I notice that e2fsck was looping forever (due to the failure described in the next paragraph).

e2fsck -C 0 -y -v /dev/sda1

Once the above warnings stopped and e2fsck finally appeared to be performing the relocations, the following error message appeared repeatedly (again acting on different groups and blocks):

Error allocating 512 contiguous block(s) in block group 729 for inode table: Could not allocate block in ext2 filesystem

After displaying the last error, e2fsck would start over, asking to relocate the same filesystem structures again.

At this point, I found myself fairly frustrated; if only I had broken the RAID mirror and/or rsync'd my partition before this happened, I could have either copied from one drive to the other and rebuilt the RAID, or created a new filesystem and copied the data back. But at the same time, this was a good chance for me to learn more about EXT2 filesystems. I did enjoy that 500-level OS class I took at UW-Madison a few years back, and I am using Linux...

(I apologize for missing any links that I have visited while researching this problem; I will do my best to reference links at the end of this article.)

It was time to update e2fsprogs. Gentoo makes this really easy. I'm not sure what version I had been using before, but for the remainder of this work I used v1.40.8. Regardless of whether you are using Gentoo, I advise downloading the e2fsprogs tarball, because manually compiling and modifying the source code was ultimately how I restored my filesystem.
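In case it helps, the from-source route looks roughly like this (the mirror URL and version below are assumptions; grab whichever release is current for you):

# Fetch and build e2fsprogs so debugfs.c can be modified later.
wget http://downloads.sourceforge.net/e2fsprogs/e2fsprogs-1.40.8.tar.gz
tar xzf e2fsprogs-1.40.8.tar.gz
cd e2fsprogs-1.40.8
./configure
make    # the debugfs binary lands in the debugfs/ subdirectory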

I started by using the tool dumpe2fs, which provided me with the superblock information along with detected groups, etc.

dumpe2fs /dev/sda1

If that command completes its dump successfully, you'll know that either your primary superblock or one of your backups is not corrupted. It also provides useful information such as the block size and blocks per group, which will be important when you need to calculate on-disk locations for raw dumps of your disk.
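For example, once you know the block size (mine was 4096 bytes; check your dumpe2fs output), you can dump any filesystem block straight off the partition with dd and inspect it in a hex viewer:

# Dump filesystem block 1539 of /dev/sda1 for inspection (the block size
# and block number here are from my setup; substitute your own values).
dd if=/dev/sda1 bs=4096 skip=1539 count=1 2>/dev/null | xxd | less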

Next, I used debugfs, which comes with e2fsprogs. This is a very useful tool because it lets you run normal filesystem access commands against your raw drive without mounting it. I mostly used 'ls' (and 'cd' when I could), but you could even create a directory if you wanted. Run the tool using the following command:

debugfs /dev/sda1

Once the debugfs console becomes available, type '?' or 'help' for a command list or just jump to 'ls'. When I tried 'ls', I received the following error:

EXT2 directory corrupted

I could not 'cd' into or 'ls' other directories because, apparently, the root inode was corrupt! So debugfs wasn't much help at this point.

The next tool I stumbled upon was lde, the Linux Disk Editor. This tool needs to be installed separately; it allows raw access to the partition so that you can look up blocks, inodes, etc. lde also has a basic full-screen interface, which helps. Run it using the following command:

lde /dev/sda1

The primary superblock is located 1024 bytes from the start of the partition, in Group 0, Block 0 (if the block size > 1024) or Block 1 (if the block size = 1024). Type 'B' to enter block-view mode, then scroll down until you reach 0x400. To change blocks, use '#0', replacing 0 with the block number you want. Check out this site for the superblock structure definition; based on the dumpe2fs output, you should be able to compare values and verify that you are looking at the superblock in lde.
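If you'd rather sanity-check the raw bytes outside of lde, a quick dd works too (a sketch, using offsets from the structure definition linked above):

# The primary superblock starts 1024 bytes into the partition and is
# 1024 bytes long. The magic number s_magic (0xEF53) sits at offset 56
# within it and is stored little-endian, so it appears as "53 ef".
dd if=/dev/sda1 bs=1024 skip=1 count=1 2>/dev/null | xxd | less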

Although the root inode's block number can be extracted by following the superblock information (see the sketch below), I went ahead and started adding debug printf statements to debugfs.c. Once I found the block number for the root directory, I went to that block in lde and was able to view the directory listings without any trouble. But debugfs continued to have problems.
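For reference, here is roughly how that lookup goes (the block and inode sizes below are assumptions; your dumpe2fs output has the real values). The root directory is inode 2, and its inode record points at the directory's data block (1539 in my case):

# With 4096-byte blocks and 128-byte inodes, inode 2 lives in group 0's
# inode table at byte offset: inode_table_block * 4096 + (2 - 1) * 128.
# debugfs can resolve both steps directly; imap and stat only need the
# superblock and group descriptors, which were intact in my case.
debugfs -R "imap <2>" /dev/sda1    # where inode 2 sits on disk
debugfs -R "stat <2>" /dev/sda1    # its block pointers, including the data block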

I also tried the following command to force e2fsck to byte-swap the data before trying to access the partition.

e2fsck -S /dev/sda1

This produced the same "EXT2 directory corrupted" error message I had received when performing an 'ls' in debugfs. I started searching Google for information about byte-swapping an EXT2 filesystem. I learned from a forum and from the e2fsprogs change logs that e2fsprogs for PowerPC had actually supported both little-endian and big-endian filesystems and had a way to distinguish between the two. However, at some point Intel proclaimed that all filesystems needed to be little-endian, at which point the PowerPC community followed suit by disabling big-endian support. The x86-compiled e2fsprogs had always supported only little-endian, but provided the "-s" and "-S" flags to byte-swap partitions.

So I decided to try enabling the byte-swap in debugfs by temporarily setting do_swap = 1 in the ext2fs_read_dir_block2 function found in lib/ext2fs/dirblock.c. I initially enabled it unconditionally, which meant that every directory block read was byte-swapped. With that change I found I was able to perform an 'ls' in debugfs once the root block was byte-swapped; however, every other directory block on disk was not byte-swapped (only the root was), so I used the following line of code to byte-swap only the root block:

if (block == 1539) do_swap = 1;

Where "1539" was the root inode block number.

I was then able to access all directories with debugfs :) The final step was to write the byte-swapped root block back to disk to make the fix permanent. To do this, I used an internal write function, triggered the next time I typed 'ls' in debugfs. There may be a better, safer way to do this, but I knew I only needed to run it once and quit, so this is how I accomplished it. After the while loop but before the return in ext2fs_read_dir_block2, I added the following lines of code:

/* One-shot hack: after the loop has byte-swapped block 1539 in memory,
   write it back to disk so the fix is permanent. */
if (block == 1539)
{
        printf("\nwriting to disk\n");
        retval = io_channel_write_blk(fs->io, block, 1, buf);
}


Where "1539" was my root inode block number.

After compiling this in, run debugfs with the '-w' flag so that debugfs is allowed to modify the filesystem.

sudo ./debugfs -w /dev/sda1

Run 'ls' once and quit. Don't run 'ls' more than once, or you'll byte-swap the block again. Then remove the lines above, remove the do_swap = 1 line, recompile, and run debugfs again. 'ls' should now work with all directory inodes.
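A quick way to spot-check the result non-interactively (the directory name below is just an example; use any subdirectory you know exists):

# Neither command should print "EXT2 directory corrupted" anymore.
./debugfs -R "ls /" /dev/sda1
./debugfs -R "ls /music" /dev/sda1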

If you made it this far, great! Go ahead and mount the filesystem, and immediately perform an rsync dump or copy to an extra drive!
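Something along these lines should do it (the mount point and destination here are assumptions; adjust to your setup):

# Mount read-only so nothing touches the freshly repaired filesystem,
# then copy everything off before doing any further fsck passes.
mkdir -p /mnt/recovered
mount -o ro /dev/sda1 /mnt/recovered
rsync -a /mnt/recovered/ /mnt/backup/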

Conclusion:
I hope this blog post proves useful for people whose filesystems were corrupted by moving an EXT2/3 partition with a commercial partition manager. Feel free to leave comments, and I will try to help you out as much as I can. This was a great learning experience, but next time I hope to do it with data that can safely be scrapped :/



Links (in no particular order):
http://ubuntuforums.org/showthread.php?t=394744
http://www.howtoforge.com/linux_resizing_ext3_partitions
http://uranus.it.swin.edu.au/~jn/explore2fs/es2fs.htm
http://lde.sourceforge.net/lde_use.html
http://www.linux-m68k.org/ext2swap.html
http://linux.die.net/man/8/debugfs
http://linux.die.net/man/8/fsck.ext2
http://www.reedmedia.net/~reed/journal/2002/20020828.html
http://www.osix.net/modules/article/?id=497
http://ubuntuforums.org/archive/index.php/t-714065.html
http://surprise.sourceforge.net/doc/tech-53.html

For those of you unable to recover your filesystem, try the free Windows tool below:

http://www.diskinternals.com/Linux-Recovery/