Hi, I'm Fuzzy.

This site, Fuzzy's Logic, is a dumping ground for things I find interesting. If you're looking for content I've personally generated, you might want to head directly to one of my other sites:

Hi, I'm Fuzzy.

The Shape of Rome

We all know the long, rich history of the Roman people, and the city's importance as the center of an empire, and thereafter as the center of the memory of that empire, whose echo, long after its end, still so defines Western concepts of power, authority and peace. What I intend to discuss instead is the geographic city, and how its shape and layers grew gradually and constantly, shaped by famous events, but also by the centuries you won't hear much about in a traditional history of the city. The different parts of Rome's past left their fingerprints on the city's shape in far more direct ways than one tends to realize, even from visiting and walking through the city. Rome's past shows not only in her monuments and ruins, but in the very layout of the streets themselves. Going age by age, I will attempt to show how the city's history and structure are one and the same, and how this real ancient city shows her past in a far more organic and structural way than what we tend to invent when we concoct fictitious ancient capitals to populate fantasy worlds or imagined futures.

The Shape of Rome


Google APIs

I've been exporting info from my iRacing Stats application to HTML to do weekly and end-of-season updates on my iRacing blog. This process had been fairly manual: I'd upload the graph images to Blogger, create a new blog post, paste in the exported HTML and edit the img tags to point to the uploaded images. It's not that it was too painful, but since it was a repetitive task I wanted to see if I could automate it.

So I dived into the Google API doco and pretty quickly worked out how to get my Python application to post a new blog update. Getting the images uploaded was more painful, since Blogger actually uses Picasa to store images and the Picasa API is terrible. I ended up using Google Drive to store the images instead, which means I needed to handle a few more steps than just uploading: namely uploading to a folder, changing the permissions to public viewing and retrieving the public URL.

I've got it all working now, including prompting the user for only their blog's URL, from which the app pulls the BlogID (rather than having the user go off and find it). All in all I'm quite proud of the result. I just hope Google don't go changing their APIs in the near future.

I'm sure the knowledge I've gained here will be useful in many other projects.

Here's a bunch of links which I found helpful:

Here's a bit of code:

import os

from oauth2client import file, client, tools
from apiclient.discovery import build
from apiclient.http import MediaFileUpload
from httplib2 import Http

# cfg (config) and pub (pub/sub messaging) come from elsewhere in the application.

def blogger_post(outfile):
 try:
  # Read the exported HTML and keep only the content between the <body> tags.
  html_file = open(outfile)
  html_lines = html_file.read()
  html_file.close()
  chop_start = html_lines.find('<body>')
  chop_end = html_lines.find('</body>')
  html_lines = html_lines[chop_start+6:chop_end]

  CLIENT_SECRET = 'client_secrets.json'  
  SCOPE = 'https://www.googleapis.com/auth/blogger'  
  store = file.Storage('storage_blogger.json')  
  creds = store.get()

  if not creds or creds.invalid:  
   flow = client.flow_from_clientsecrets(CLIENT_SECRET, SCOPE)  
   creds = tools.run(flow, store)

  service = build('blogger', 'v3', creds.authorize(Http()))  
  body = {  
   "kind": "blogger#post",  
   "id": cfg.config['Blogger']['blogid'],  
   "title": os.path.basename(os.path.splitext(outfile)[0]),  
   "content":html_lines  
   }

  # Insert the post into the configured blog as a draft.
  request = service.posts().insert(blogId=cfg.config['Blogger']['blogid'], isDraft=True, body=body)
  response = request.execute()  
  return response['url']  
 except:  
  return "Failed"

def blogger_img_upload(filename):  
 try:  
  CLIENT_SECRET = 'client_secrets.json'  
  SCOPE = ('https://www.googleapis.com/auth/drive', 'https://www.googleapis.com/auth/drive.file')  
  store = file.Storage('storage_drive.json')  
  creds = store.get()

  if not creds or creds.invalid:  
   flow = client.flow_from_clientsecrets(CLIENT_SECRET, SCOPE)  
   creds = tools.run(flow, store)

  service = build('drive', 'v2', creds.authorize(Http()))

  q = "title = 'iRacing Stats Graphs' and mimeType = 'application/vnd.google-apps.folder'"

  request = service.files().list(q=q)  
  response = request.execute()

  if len(response['items']) == 0:  
   body = {  
    "title": "iRacing Stats Graphs",  
    "mimeType": "application/vnd.google-apps.folder"  
    }

   request = service.files().insert(body=body)  
   response = request.execute()  
   folderId = response['id']  
  else:  
   folderId = response['items'][0]['id']

  pub.sendMessage('Uploading', graph=os.path.basename(os.path.splitext(filename)[0]))  
  body = {  
   "title": os.path.basename(os.path.splitext(filename)[0]),  
   }  
  body['parents'] = [{'id': folderId}]  
  media_body = MediaFileUpload(filename)

  request = service.files().insert(body=body, media_body=media_body)  
  response = request.execute()  
  fileId = response['id']

  # Make the uploaded file publicly readable, then fetch its shareable link.
  body = {
   "type": "anyone",  
   "role": "reader"  
   }  
  response = service.permissions().insert(fileId=fileId, body=body).execute()  
  response = service.files().get(fileId=fileId).execute()  
  return response['webContentLink'].split('&')[0]  
 except:  
  print("Upload of %s to blogger failed" % os.path.basename(os.path.splitext(filename)[0]))  
  return "Failed"

def blogger_config(url):  
 try:  
  CLIENT_SECRET = 'client_secrets.json'  
  SCOPE = 'https://www.googleapis.com/auth/blogger'  
  store = file.Storage('storage_blogger.json')  
  creds = store.get()

  if not creds or creds.invalid:  
   flow = client.flow_from_clientsecrets(CLIENT_SECRET, SCOPE)  
   creds = tools.run(flow, store)

  service = build('blogger', 'v3', creds.authorize(Http()))  
  response = service.blogs().getByUrl(url=url).execute()  
  cfg.write_blogid(response['id'])  
  return True  
 except:  
  pub.sendMessage('Alert', msg="Unable to find BlogID of: %s" % url, title="Blogger Config Failed")  
  return False
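
Roughly speaking, the glue between these functions looks something like the sketch below. The publish_week wrapper and the simple string replacement of image references are just illustrative, not the app's actual code:

def publish_week(outfile, graph_files, blog_url):
    # Resolve and store the BlogID from the blog's URL.
    if not blogger_config(blog_url):
        return "Failed"

    # Upload each graph to Drive and point the exported HTML at the public links.
    html = open(outfile).read()
    for graph in graph_files:
        public_url = blogger_img_upload(graph)
        if public_url != "Failed":
            html = html.replace(os.path.basename(graph), public_url)

    # Write the adjusted HTML back out, then post it to Blogger as a draft.
    with open(outfile, 'w') as fixed:
        fixed.write(html)
    return blogger_post(outfile)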

PhotoFrame PC Update

The Digital PhotoFrame PC located in my kitchen sees near daily use; in the morning while I make coffee I trigger playback of the Australian Broadcasting Corporation's 90 second news headlines and NPR's 5 minute news update. While I'm preparing dinner I'll often use it to stream jazz from KJZZ.org. Occasionally I'll use it to display a recipe to follow along with.

All in all there isn't much reason I'd want to change the setup; it currently functions exactly as I want. However, it is running Windows XP, which is no longer supported by Microsoft. I don't like the idea of having an unsecurable machine running on the network, so I've started planning to switch it to Lubuntu Linux.

Obviously this would require recreating all the AutoIt scripts which currently trigger the tasks above. I'm very confident I could easily do this with Python-based scripts; there's a rough sketch after the list below. I've also started looking for one-to-one replacements for the handful of other features I make use of:

  • Variety to handle background image display and rotation.
  • Conky or LXDE screenlet for weather and clock.
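
As a taste of what those Python trigger scripts might look like, here's a minimal sketch; it assumes mpv is installed on the Lubuntu box, and the stream URLs are placeholders rather than the real addresses:

#!/usr/bin/env python3
# Hypothetical replacement for one of the AutoIt triggers: play the morning
# news bulletins back to back, then exit. The URLs below are placeholders.
import subprocess

STREAMS = [
    "https://example.com/abc-news-90-seconds.mp3",  # placeholder
    "https://example.com/npr-news-now.mp3",         # placeholder
]

def play(url):
    # mpv blocks until playback finishes; --no-video keeps it audio-only.
    subprocess.run(["mpv", "--no-video", url])

if __name__ == "__main__":
    for stream in STREAMS:
        play(stream)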

A Rare Peek Into The Massive Scale of AWS

Like many hyperscale datacenter operators, Amazon started out buying servers from the big tier one server makers, and eventually became the top buyer of machines from Rackable Systems (now part of SGI). But over time, like Google, Facebook, Baidu, and its peers, the company decided to engineer its own systems to tune them precisely for its own workloads and, importantly, to mesh hand-in-glove with its datacenters and their power and cooling systems. The datacenters have evolved over time, and the systems have along with them in lockstep.

In the past, Amazon has wanted to hint at the scale of its infrastructure without being terribly specific, and so they came up with this metric. Every day, AWS installs enough server infrastructure to host the entire Amazon e-tailing business from back in 2004, when Amazon the retailer was one-tenth its current size at $7 billion in annual revenue.

A Rare Peek Into The Massive Scale of AWS

Google Thinks I'm Interested In

You can find out what topics Google thinks you're interested in via this link.

Here's mine:

  • Action & Adventure Films
  • Android Apps
  • Android OS
  • Antivirus & Malware
  • Arts & Entertainment
  • Audio Equipment
  • Banking
  • Blu-Ray Players & Recorders
  • Bollywood & South Asian Film
  • Business & Industrial
  • Business & Productivity Software
  • Cable & Satellite Providers
  • Camcorders
  • Camera & Photo Equipment
  • Cameras
  • Cameras & Camcorders
  • Chips & Processors
  • Computer & Video Games
  • Computer Components
  • Computer Drives & Storage
  • Computer Hardware
  • Computer Memory
  • Computer Monitors & Displays
  • Computer Security
  • Computers & Electronics
  • Consumer Electronics
  • Dance & Electronic Music
  • Data Backup & Recovery
  • Desktop Computers
  • Electronic Accessories
  • Email & Messaging
  • Flash Drives & Memory Cards
  • Game Systems & Consoles
  • Games
  • Hard Drives
  • Headphones
  • Home & Garden
  • Home Theater Systems
  • ISPs
  • Ink & Toner
  • Input Devices
  • Internet & Telecom
  • Internet Clients & Browsers
  • Jazz
  • LCD TVs
  • Laptops & Notebooks
  • Mac OS
  • Memory Card Readers
  • Mobile & Wireless
  • Mobile & Wireless Accessories
  • Mobile Apps & Add-Ons
  • Mobile OS
  • Mobile Phones
  • Movies
  • Music & Audio
  • Music Recording Technology
  • Network Storage
  • Networking
  • Networking Equipment
  • News
  • Online Video
  • Operating Systems
  • Photo & Video Software
  • Power Supplies
  • Product Reviews & Price Comparisons
  • Recording Industry
  • Security Products & Services
  • Shooter Games
  • Smart Phones
  • Software
  • Software Utilities
  • Sony PlayStation
  • Sound & Video Cards
  • South Asian Music
  • Speakers
  • Stereo Systems & Components
  • TV & Video
  • TV & Video Equipment
  • Tablet PCs
  • Televisions
  • Travel
  • Voice & Video Chat
  • Web Services
  • Webcams & Virtual Tours
  • Windows Mobile OS
  • Windows OS
  • Xbox

Not bad.


Fastest Man on Earth

Sitting alone atop the Sonic Wind, Stapp looked like a pathetic figure. A siren wailed eerily, adding to the tension, and two red flares lofted skywards. Overhead, pilot Joe Kittinger, approaching in a T-33, pushed his throttle wide open in anticipation of the launch. With five seconds to go Stapp yanked a lanyard activating the sled's movie cameras, and hunkered down for the inevitable shock. The Sonic Wind's nine rockets detonated with a terrific roar, spewing 35-foot long trails of fire and hurtling Stapp down the track. "He was going like a bullet," Kittinger remembers. "He went by me like I was standing still, and I was going 350 mph." Just seconds into the run the sled had reached its peak velocity of 632 miles per hour -- actually faster than a bullet -- subjecting Stapp to 20 Gs of force and battering him with wind pressures near two tons. "I thought," continues Kittinger, "that sled is going so damn fast the first bounce is going to be Albuquerque. I mean, there was no way on God's earth that sled could stop at the end of the track. No way." But then, just as the sound of the rockets' initial firing reached the ears of far off observers, the Wind hit the water brake. The rear of the sled, its rockets expended, tore away. The front section continued downrange for several hundred feet, hardly slowing at all until it hit the second water brake.

Fastest Man on Earth

The First Spacewalk

Alexei Leonov did not feel as if he was in motion as he clambered on to the outside of the spacecraft, 500km above the Earth. But in reality, he was hurtling around our planet at speeds that are many times faster than a jet aircraft. The vast, vivid geography of our planet stretched out before him - a giant canvas of contrasting colours and textures. He was the first of his species to see our planet in such glorious aspect.