Schismatrix Plus (Plus?)

A Cyberpunk masterpiece

Recently I had the great fortune to read Schismatrix Plus by Bruce Sterling, and I was truly impressed. This is another of those gems, like the Culture series by Iain M. Banks or the Xeelee sequence by Stephen Baxter, that for some reason never won the highest honors of written sci-fi. I would like to think that if Schismatrix Plus were published for the first time today, it might get more accolades than it did on its original publication, because cyberpunk and space opera are better understood and more widely accepted as serious genres now than they were in the early 80s. Schismatrix Plus is essentially an omnibus edition that contains the original novel Schismatrix and the other short stories that take place within the Schismatrix universe.

Old Schismatrix Cover

The main novel has several layered themes and stories as it follows humanity through a couple of centuries of constant change in the solar system. The story begins with the young aristocratic main character being exiled from his wealthy space station after a suicidal political stunt goes wrong. There is also an overarching cold war theme, except instead of the conflict being between communism and capitalism, it is between those who focus on biological modification, the Shapers, and their arch-rivals the Mechanists, who favor cybernetic and mechanical modification. On top of that there are many other themes in the novel, chief among them rapid political and technological upheaval and the way that change constantly reshuffles the lives of those caught up in it. The changes to human existence introduced by longevity technologies and cloning are explored as the ever-aging characters run into each other in different times and forms, or raise clones to replace those they've lost.

For at least the first half of the story the main character is pursued and dogged from station to station by his former childhood friend and arch-rival. Along the way he is branded a "Sundog," a term for someone who drifts from station to station without any true allegiances. The narrative of the main character coming to terms with his repeated inability to settle down is another overarching theme of the novel and complements the theme of constant change. Eventually a visitation by a low-key, Ferengi-like alien race, the Investors, upsets the cold war between Shapers and Mechanists as the human race grapples with the fact that it isn't alone in the universe, which further plays into the theme of change. The capitalism-conscious Investors provide a hilarious solution to the Fermi paradox, the question of why we haven't met aliens yet: the aliens never found a way to make money off the human race until our technology was advanced enough, so they never bothered to contact humanity.

The other short stories in the book, which take place in the same universe as the main novel, are all pretty interesting. Swarm is a more classic space-opera alien-encounter story about an insectoid hive species that develops intelligent drones whenever it's threatened but is otherwise unintelligent. I see echoes of this kind of emergent intelligence in Alastair Reynolds's Inhibitors; I wonder if he was influenced by this story? Another story, Cicada Queen, paints a pretty humorous slice of future life on an economically doomed space station and features one of the characters from the main novel. Twenty Evocations is written in a prose-poem-like style, and the other two stories are also interesting.

Overall, I was pleasantly surprised at the quality of the work, both as a literary work and as science fiction. Even though the novel was written in the mid-80s, most of the human-based technology featured in the stories could exist in a more recent hard sci-fi story. Bruce Sterling has become something of a technology writer since then, with a blog hosted on Wired. Schismatrix also reminds me of Ken MacLeod's The Stone Canal, with its story of rivals set against a backdrop of different forms of immortality and transhumanism. If you're looking for something new to read and you like sci-fi or cyberpunk, I highly recommend Schismatrix.


First posted on 8/29/2016 8:10:05 PM

Creating a Node.js package for npm

Node.js and npm are among the fastest growing development platforms and communities. This is primarily because Node.js, as its name implies, is a JavaScript-based development ecosystem, which takes advantage of the fact that JavaScript is one of the most widely used languages today. While I don't have any links, I wouldn't be surprised if some metrics show JavaScript as the most widely understood programming language: almost every developer learns JavaScript because almost every developer at least attempts to build a JavaScript-enhanced webpage at some point. For this reason, I thought it would be a good idea to make some npm packages so that I could gain some mastery over the Node.js ecosystem. I had worked with Node.js in the past to set up an integration test system and to use various common or custom command-line tools, but I had never created a package.

Wikipedia says that "Node.js was originally written in 2009 by Ryan Dahl," but as I recall it probably wasn't until about 2012-2014 that the tech started getting more mainstream press and blog posts. Because Node.js was built on top of Google's V8 engine, it can run wherever V8 has an implementation, which I guess is most major OSes and platforms. Not only can a developer now use JavaScript outside of a client-side web site, but Node.js also has a web server implementation. This means that JavaScript can be used in a web app in both the client-side and the server-side code. While that's not that important to me, it does open up the possibility that some of the repetition between client-side and server-side code could be eliminated.

npm is Node's package manager; it's like NuGet for .NET or RubyGems for Ruby. This type of package manager allows developers to easily publish software packages and libraries, and consumers of those libraries to easily install them. npm was initially created by Isaac Schlueter, and its first release was in January 2010. The only bad thing I can say about npm is that their logo policy is so defensive that I'm not even sure I can put an image of the npm logo in this article without getting an angry email.

To practice using Node.js and npm's packaging system, I wanted to choose some functionality that would actually be helpful to someone. When I'm reading a Stack Exchange article and I notice a plain-text link, sometimes I will replace that plain-text link with a linked text title. So an article containing the plain-text link 'https://www.google.com' would become 'Google', or perhaps a fuller reference like "'Google', google.com". Unfortunately, I learned that because Stack Exchange is hesitant to allow public automated editing, a full website login is needed for an app to update an article, which makes fully automated editing less practical. Therefore I narrowed the scope of my learning exercise to performing all of the steps of Stack Exchange link replacement except the actual update to the article. I broke the task down into 4 separate packages to maximize the chance that someone will actually find one of the packages useful and download it. Each of the 4 packages is configured with a package.json file.

Package.json is the backbone of any npm package; it's the base-level config file. If you're familiar with the .NET package system, the package.json file is like a combination of a Visual Studio project file and a NuGet spec file. One of the primary functions of the package.json file is managing dependencies. The nice thing about npm packages is that it's incredibly easy to include references to other packages and to have those references automatically installed when your package is installed, so that it automagically works every time. Version range strings let you specify whether a dependency must be an exact version or whether newer or different versions may be installed instead. Package.json can also be configured to install some dependencies only for development, in case you want to set up some special dev tools.

Testing is built into the design of npm packages. A script or series of commands can be configured to run when npm test is executed, and commands can be chained together so that several testing tools run in sequence. In this manner, I wired up not only tests but also a vulnerability scanner and a style checker, among other widgets. These run whenever I make a new commit, via the free Travis CI service, or I can run them at the command line with a simple 'npm test'. The vulnerability scanner I have configured to run during check-in and test, snyk, has already alerted me to 2 new vulnerabilities in the request package, which is the go-to package for web requests. Sadly, in my stack-exchange-markdown-retriever project, I have a dependency on the stackexchange package, and I haven't been able to get the package owner to update his request version, even though I created a pull request to help him. The package owner may not have an interest in his stack-exchange package anymore, so I may have to replace it with direct calls to the Stack Exchange API.
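
To make that concrete, here is a hypothetical package.json fragment showing both ideas: version range strings for dependencies and a chained test script. The package names, versions, and commands below are placeholders for illustration, not the exact contents of my packages.

    {
      "name": "example-package",
      "version": "1.0.0",
      "dependencies": {
        "request": "^2.74.0"
      },
      "devDependencies": {
        "mocha": "~3.0.0",
        "snyk": "1.x"
      },
      "scripts": {
        "test": "snyk test && mocha"
      }
    }

The caret range accepts any newer compatible minor or patch release, the tilde range accepts only newer patches, and an exact version pins the dependency completely.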

Publishing an npm package is fairly straightforward and, like NuGet, it is open to submissions from any registered user. You simply have to register with the npm website to receive an API key that is used to publish your package to the npm registry. Unlike NuGet, however, npm doesn't allow duplicate package names (I'm looking at you, people who took the name "Random Name Generator" from my NuGet package; mine was registered first, so I feel a certain ownership, for whatever that's worth, which isn't much). The documentation for publishing to npm is well written and well presented, which is a hallmark of a framework that's going to be around for a while.

Likewise, updating an npm package is fairly easy. The npm version command is used to update the version number in the package. The cool thing about many npm commands, like 'version', is that if your project is in a Git repository they will automatically create commits and tags. So npm version will actually create a commit and a tag for the new version when it's called. There are other examples of Git integration, and it's yet another reason why the npm/Node.js platform is both powerful and modern.

Here are some links to the npm packages I've created:

So creating a Node.js npm package is not that tough, nor is maintaining one. Node.js is a technology that turns JavaScript into an all-encompassing server-side and browser-side language. In some ways JavaScript is the only language for which this is possible, because it's the only programming language that is explicitly implemented in all modern browsers. JavaScript, while not the best-designed language ever created, at least takes advantage of some of the C++/Java idioms that make it understandable even to first-year computer science students. I expect the Node.js platform to remain a powerful slice of the web development market for at least a decade or more, so learning more about it has a fair chance of being useful.


First posted on 8/11/2016 7:47:30 PM

Playing with F#

Recently I decided to check out F# and make a NuGet package with it to sample the language. I've heard a lot about F# over the years, so why not take it for a test drive and see what all the fuss is about? To me, creating a semi-useful NuGet (or npm, or whatever) package is a proof-positive way to demonstrate basic skills in a language, framework, or platform.

Wikipedia says that F# "is a strongly typed, multi-paradigm programming language that encompasses functional, imperative, and object-oriented programming techniques," which is both a helpful and an unhelpful description, since it sounds like the language could have almost any feature. To some extent that's true: like C#, F# borrows from other paradigms, so it has a wide mixture of language features. While F# supports "functional, imperative, and object-oriented programming techniques," it is intended as a "functional-first" language, meaning that while you can use it in an imperative or object-oriented way like C#, its intended usage style is functional. Functional programming is a paradigm in which most of the code is broken down into a series of function evaluations, rather than focusing on state as in object-oriented programming.

The main workhorse of the IsImageUrl algorithm is the match keyword, or 'match expression' as its usage is sometimes called. Match expressions are like C#'s switch keyword on crack: instead of just matching on an enumeration, string, or number value, as the switch keyword does, match expressions can match on complex logic. This makes match expressions the workhorse of many F# functions and examples, performing the role of a chain of else ifs. Here's an example from the IsImageUrl library I wrote:

    let IsImageUrl(opt:string) =                
      match opt with
        | null | "" -> false
        | url when Uri.IsWellFormedUriString(url, UriKind.RelativeOrAbsolute) = false -> false
        | url when hasAnImageFileExtension url -> true 
        | url when hasAnNonImageFileExtension url -> false 
        | _ -> requestUrlAndCheckIfImage opt      

Here the match expression checks a series of conditions and branches the logic to produce different results. C#'s switch, by contrast, usually only works on constant value cases and can't execute arbitrary conditional logic for each case.

I chose the concept of checking for image URLs as a test project because I felt it would be both limited in scope and useful. By looking at the packages available in npm and comparing them to the ones available on NuGet, I came up with some possible sample projects. "is-image-url" is a simple image-URL-checking package in Node with no comparable package in .NET, so I figured why not make a .NET package that serves a similar purpose. Smaller packages are more common in the npm world than they are in the NuGet/.NET ecosystem. Nonetheless, small targeted packages like IsImageUrl are sometimes preferable to fully featured frameworks, because in many cases there is no desire to learn a heavier package when only a small set of functionality is needed.

The main algorithm in my IsImageUrl package is accessed through an extension method, IsImageUrl. Extension methods are not a normal part of F#, so I had to use the System.Runtime.CompilerServices.Extension compiler attribute. I wanted to target C# consumers for the package, because I'm already familiar with testing in C# and C# is the most used .NET language. The algorithm tries to guess whether the URL is an image by checking its extension. If it can't make that determination from the file extension, it requests the URL and checks the returned metadata to determine whether the URL points to an image.
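
Since the package targets C# consumers, here is a rough sketch of what calling it from C# might look like. The namespace and call below are assumptions for illustration, not the package's documented API.

    // Hypothetical C# usage; the namespace is an assumption for illustration.
    using IsImageUrlLib;

    class Program
    {
        static void Main()
        {
            // The F# function decorated with the Extension attribute surfaces
            // in C# as an ordinary extension method on string.
            bool isImage = "https://example.com/photo.png".IsImageUrl();
            System.Console.WriteLine(isImage);
        }
    }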

After my brief outing with F#, I would say that I'm more impressed than I expected to be. F# has a lot of smart features and is flexible enough to allow C#-style object-oriented programming. That said, the language does not really lend itself to procedural or classic object-oriented design; the focus is on functional programming. Functional programming, to me, while extremely powerful, is also one of the more difficult styles of programming to master initially, unlike procedural and object-oriented programming, which are more intuitive, if more verbose, paradigms. When I code a feature, my first thought is "can a rookie developer learn how to maintain this code without too much trouble or time?" If I get super-rich or promoted, I need to be able to pass off my code as quickly and cheaply as possible, and that means writing code that is easy to read and understand, especially for total novices. For that reason I wouldn't really recommend F# except for experienced teams that plan on holding onto their code for a while.

I think F# is a great language to learn if you want to improve your craft as a software developer. It rivals C# in the number of different features and ideas that have been jammed into it. Hopefully my sample extension method will also be useful to someone as an actual tool.


First posted on 6/7/2016 10:36:51 PM

Out with the old, in with the Umbraco

I've been meaning to replace my old blog for a long time now, but I was dreading getting my hands dirty with it. I made the original blog more than five years ago, back when ASP.NET MVC was still being developed, and it was based on an early sample blog called Oxite.

Oxite

My primary motivation in using the Oxite blog was to prove to myself and others that I at least understood how to modify and deploy an ASP.NET MVC website, which the exercise more or less did. The problem was that the Oxite sample was not particularly rigorous, as it was intended mostly as an example, and to make matters worse, later versions of ASP.NET MVC eventually made the sample more or less obsolete. I also later learned there was some sort of security hole, probably SQL injection, in the comments section or perhaps elsewhere, and I had to make the comments read-only, further reducing the utility of the blog. On top of all that, its base design was not particularly pretty compared with the responsive mobile designs of today. I decided that I needed to scrap the site and start over with a new system, preferably a CMS. But there are so many; which one to choose?

In a recent project, I got a chance to develop new features for a website based on the Umbraco CMS. Having worked on a number of CMSes written in PHP and other languages, I know that CMSes often have ugly or confusing design concepts, and sometimes they are unexpectedly missing features. So I was not particularly excited about working on a CMS-based application. But with Umbraco, I was pleasantly surprised to find a nice, clean, straightforward, and easy-to-modify CMS, written using some of the latest web technologies, like AngularJS.

Umbraco

I would normally paste an image of the Umbraco logo here to make the article less boring, but apparently the for-profit side of Umbraco has some restrictive trademark rules around its logo, since they make money from official training.

Setting up and installing Umbraco was fairly straightforward. The CMS generates and uses a SQL Server Compact Edition database, so it will get itself up and running with no real database setup, at least initially. Once you decide to deploy the database, however, you'll have to use a tool like SQL Server Compact & SQLite Toolbox to generate CREATE TABLE and data INSERT scripts. This was kind of confusing at first, but eventually I learned how to migrate the data out of the Compact database with ease.

SQL Server Compact & SQLite Toolbox

Next I began porting all of my old blog posts over to the default Umbraco blog setup. I ran into trouble with the default post document type and template not displaying full html as my original blog had. Perhaps I needed to use some other document type and template, but none of them seemed to fit what I needed at the time. So I figured it was a good time to explore the creation of custom document types and templates. I had some experience with template creation before, but I felt it would probably be good to go over it again to cement my knowledge of the basic building blocks of the CMS.

After creating a new HTML/blog-friendly template and document type, I started to look at how the blog looked overall. The default web site design was based on the Overflow design at HTML5 UP, which offers a series of Creative Commons-licensed HTML5 site designs.


I noted that there were several stylistic issues that probably needed some editing. First off, the first page featured just my name and nothing else, and the second page was just a single button that led to the next page. While these elements made for a slick demo, they seemed like a waste of space for my purposes. So I stripped out those elements and, with some help from my beautiful wife, cleaned up and reworked the HTML and CSS for the site.

Next, I looked at my ajax-enabled demo examples/blog posts, like the random name generator and the JSON pretty printer, and looked at how to implement custom ajax functionality in Umbraco, another task that I had completed before but felt it couldn't hurt to practice again. In order to expose an MVC controller for API usage, one has to implement a controller that inherits from Umbraco's SurfaceController. After that, it's pretty much like any other ASP.NET MVC controller; the only major difference is that, like Web API, the route URLs are usually prefixed with '/api/'.
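
As a rough illustration, a minimal surface controller might look something like the sketch below, assuming an Umbraco 7-era project; the controller name, action, and payload are hypothetical, not the blog's actual code.

    using System.Web.Mvc;
    using Umbraco.Web.Mvc;

    // Hypothetical example; the controller name, action, and payload are
    // illustrative only.
    public class NameGeneratorSurfaceController : SurfaceController
    {
        // Once Umbraco registers its routes, this action is reachable from
        // client-side JavaScript like any other MVC endpoint.
        public JsonResult RandomName()
        {
            // Placeholder payload standing in for the real generator logic.
            return Json(new { name = "Example Name" }, JsonRequestBehavior.AllowGet);
        }
    }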

Finally, almost everything seemed in place, except I wanted to display links to my GitHub and NuGet accounts, and the built-in social icon partial view didn't have code for either of those. I modified the umbSocial partial view to add spots for those two sites, and my wife found a GitHub icon among the ones provided by Font Awesome. However, we couldn't find a Font Awesome icon associated with NuGet.

My wife found a nice blog post on converting an image into a font at WebdesignerDepot.com. First, she used Inkscape to convert the standard NuGet icon into a vector graphic, and then she used one of the online converters mentioned in the article to convert the vector image into a font. Lastly, we modified the CSS to include the new icon, and that was that.

After all is said and done, I think the blog turned out pretty well, and I got some nice practice setting up and deploying an Umbraco blog.


First posted on 3/22/2015 11:30:22 AM

Json Pretty Printer/Beautifier Library For .Net

In the current project I'm working on, I need to create a lot of configuration/data files for when the application builds the database (Fluent NHibernate). This is necessary because the application must populate the database with some default, mostly static domain objects that are required for the application to begin working normally.

The two main .NET JSON beautifiers I found were Jayrock and the C# source code of a "quick and dirty" JSON pretty printer by Raymond Glover. I actually like Raymond's a little better, because Jayrock is more of a general-purpose JSON library than one focused on pretty printing.

Raymond's was cool, but I wanted one that would conform exactly to a specific JSON beautifier, JsonLint. Seeing how short and easy it was for Raymond, I decided to write my own JSON beautifier/pretty printer and wire it up to my blog. I think I mostly wanted an excuse to use the strategy pattern in a situation where it appeared to be needed.

You can download the library using NuGet (NuGet Project Page) or you can get the source code here. [223k, zip]

You can use the pretty printer object directly or just the extension methods I've provided; I've included JSON beautifying extension methods for ease of use.
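
As a quick illustration of the extension-method style, here is a hypothetical usage sketch; the namespace and the PrettyPrintJson method name are placeholders, not necessarily the library's actual API.

    // Hypothetical usage sketch; the namespace and method name below are
    // placeholders, not necessarily the library's actual API.
    using JsonPrettyPrinter;

    class Example
    {
        static void Main()
        {
            string compact = "{\"name\":\"value\",\"items\":[1,2,3]}";

            // An extension method like this would return indented, readable JSON.
            string pretty = compact.PrettyPrintJson();
            System.Console.WriteLine(pretty);
        }
    }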


First posted on 4/16/2010 6:47:00 PM

jqGrid missing search codes

One thing in particular I found missing was that, when you are implementing searching, you have to translate the search operator codes sent by the grid to the server into something you can use on the server side. One mildly frustrating problem was that the documentation only lists 9 of the 14 search operator codes that you have to translate.

I'll list the rest of the codes here; a sketch of a server-side mapping follows the list:
bn - does not begin with ( !(LIKE val%) )
in - is in ( checks if the searchField is in the searchString )
ni - is not in ( checks if the searchField is not in the searchString )
en - does not end with ( !(LIKE %val) )
nc - does not contain ( !(LIKE %val%) )
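
To make the translation concrete, here is a minimal sketch of how the full set of fourteen operator codes might be mapped to SQL-like templates on the server side; the dictionary below is one possible illustration, not code from jqGrid itself.

    using System.Collections.Generic;

    // Illustrative mapping of the 14 jqGrid search operator codes to SQL-like
    // templates, where {0} is the field name and {1} a parameterized search value.
    static class JqGridSearchOperators
    {
        public static readonly IDictionary<string, string> Templates =
            new Dictionary<string, string>
            {
                { "eq", "{0} = {1}" },                      // equal
                { "ne", "{0} <> {1}" },                     // not equal
                { "lt", "{0} < {1}" },                      // less than
                { "le", "{0} <= {1}" },                     // less than or equal
                { "gt", "{0} > {1}" },                      // greater than
                { "ge", "{0} >= {1}" },                     // greater than or equal
                { "bw", "{0} LIKE {1} + '%'" },             // begins with
                { "bn", "{0} NOT LIKE {1} + '%'" },         // does not begin with
                { "ew", "{0} LIKE '%' + {1}" },             // ends with
                { "en", "{0} NOT LIKE '%' + {1}" },         // does not end with
                { "cn", "{0} LIKE '%' + {1} + '%'" },       // contains
                { "nc", "{0} NOT LIKE '%' + {1} + '%'" },   // does not contain
                { "in", "{0} IN ({1})" },                   // is in
                { "ni", "{0} NOT IN ({1})" }                // is not in
            };
    }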


First posted on 1/3/2015 6:18:36 PM

Random Name Generator .Net Library

While I was working on my current project, I found I needed the ability to generate a name randomly. It's a pretty simple task, but I was unable to find a free .NET random name generating library that I could use. After reading this question, I came to the conclusion that no such library exists. So I decided to make one myself and give it out for free, in the hopes that it might generate a few hits.

You can try it out right here on the blog.


You can get the library using NuGet (NuGet Project Page), or you can download a zip file containing the DLL here.


First posted on 1/3/2015 6:17:37 PM

There Can Be Only One!

One of the main reasons I set up my own custom site was to begin experimenting with SEO. Search engine optimization is one of the primary ways to get new surfers to your website.

What does that mean in practice?

Trying to figure out ways to get your website listed higher in Google search results.

Usually this is done by getting people to link to you, as well as by creating content for your website. Preferably content that {gasp} is actually useful to someone other than yourself. That's part of the motivation for this very post, in fact :p.

Other than creating content, I have also signed up for Google's Webmaster Tools and Analytics. I don't know if I've ever come across a serious web page that did not use Google Analytics, and I figured that Google might rank me higher simply by having more information about my website. I also added a robots.txt to help the robots crawl the website, and soon I'll be adding a sitemap.

We'll see how it goes, but I'm already watching my site move up in the results of different search queries!


First posted on 12/14/2014 1:17:52 AM

SchemaUpdate for NHibernate

I was looking into how SchemaExport and SchemaUpdate work in NHibernate.Tool.hbm2ddl. I'm about to put another website up, and I wanted to be able to generate the database schema for the new site instead of creating the tables by hand. I've done this for a couple of applications before, but I have to admit to only vaguely understanding how hbm2ddl works. This is probably due to the almost total lack of documentation on these features, outside of a quick run-down in the NHibernate docs.

At first glance it looks like SchemaExport generates SQL CREATE TABLE statements, while SchemaUpdate generates a bunch of schema-altering statements. But one thing I was curious about was whether or not SchemaUpdate would create new tables if they didn't exist. After looking at the current NHibernate source code, it appears that SchemaUpdate iterates through each table: if the table exists in the destination database it generates an update script, otherwise it generates a create script.
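
For reference, here is a minimal sketch of how the two tools are typically invoked; it assumes an existing hibernate.cfg.xml and mappings, and the flags reflect my understanding of the hbm2ddl overloads.

    using NHibernate.Cfg;
    using NHibernate.Tool.hbm2ddl;

    // A minimal sketch of invoking the two hbm2ddl tools; assumes an
    // hibernate.cfg.xml (plus mappings) is already in place.
    class SchemaTools
    {
        static void Main()
        {
            var cfg = new Configuration().Configure();

            // SchemaExport: write the full CREATE script to stdout without
            // executing it against the database (script, export, justDrop).
            new SchemaExport(cfg).Execute(true, false, false);

            // SchemaUpdate: compare the mappings to the live database and apply
            // CREATE/ALTER statements for anything missing (useStdOut, doUpdate).
            new SchemaUpdate(cfg).Execute(true, true);
        }
    }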


First posted on 12/14/2014 1:14:19 AM

Encryption Blues

Currently I'm trying to create a single sign-on solution, and I was having trouble because sometimes (but not always) the encrypted text would fail to decrypt. I found out that later .NET versions have stricter checking for valid characters, so when the encrypted byte array was converted to a string, the encoding process would create characters that the stricter decoding process did not like. This article on MSDN's .NET security blog finally helped me out.
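
One common way to avoid this class of problem is to round-trip ciphertext as Base64 rather than through a text encoding; here is a minimal sketch of that idea, not the exact fix from my single sign-on work.

    using System;

    // Ciphertext is arbitrary binary data, so converting it to a string with a
    // text encoding (for example Encoding.UTF8.GetString) can produce sequences
    // that a stricter decoder rejects. Base64 round-trips the bytes losslessly.
    static class CipherTextTransport
    {
        public static string ToTransportString(byte[] encryptedBytes)
        {
            return Convert.ToBase64String(encryptedBytes);
        }

        public static byte[] FromTransportString(string transport)
        {
            return Convert.FromBase64String(transport);
        }
    }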


First posted on 12/8/2014 8:10:58 PM

CV

Summary

I've built responsive, high-traffic websites with the latest web technologies for large, high-profile companies. I've built and worked on applications and websites of almost every scale, from small-business sites to large government healthcare systems.

Given a reasonable amount of time, I can design or build a technical solution for almost any solvable business problem in a wide variety of languages. I pick up new technologies and frameworks rapidly, and I quickly identify and adopt best practices for coding and usage.

I've worked on large on-site teams, distributed/WFH teams, small teams, and solo, and I have succeeded in all of these team structures. I've trained multiple developers and DBAs to be productive and successful, and some of my web posts on technical topics have had hundreds of thousands of unique visits. I can truly help an organization achieve almost any practical business or technical goal.

Education

Bachelor of Science, Computer Science
University of Georgia
Athens, Georgia
2003

Courses Included: Object-Oriented Programming, Human-Computer Interaction, Database Management, Operating Systems, Computer Architecture, Evolutionary Computation, Software Engineering, Algorithms, Data Structures, Compilers, Web Programming, Computer Graphics

Technical Skills

Languages and Frameworks
Asp.net MVC, Entity Framework, C#, CSS 3, HTML 5, JavaScript, jQuery, RequireJS, Java, Scala, Play Framework, Git, Autofac, Umbraco, FubuMVC, Groovy, Grails, Windows Forms, .Net Framework, LINQ, Visual Basic .Net, Visual C++.NET, C++, SQL, T-SQL, PHP, Python, ASP, Aqualogic Service Bus (ALSB), Flash, JSP, Servlets, RSS, XML, ActionScript, DirectX, Visual Basic classic, Basic, C, OpenGL, Podcast Feed Aggregations

Development Applications and Web Servers
Visual Studio 2013, 2012, 2010, 2008, 2005 and 2003, Visual Source Safe, Team Foundation Server, Teamprise, Jira, Bitbucket, GitHub, Eclipse, NGEN, SQL Server Management Studio, Firebug, NSIS, Apache, Tomcat, Flash Com Server, Internet Information Services, Weblogic, SVN, CVS

Patterns and Techniques
Experience with test-driven development (TDD), domain-driven design (DDD), and Agile development techniques and team organization. Very familiar with many design patterns (e.g. Factory, Decorator, Visitor) and refactorings.

Database Applications, Layers, and Files
Microsoft SQL Server 2012/2008/2005/2000, SQL Server Reporting Services 2005, SQLite, NHibernate, Fluent NHibernate, Entity Framework Code First, LINQ to NHibernate, Hibernate, nettiers, MySQL, LLBLGen, Castle ActiveRecord, Strongly Typed Datasets, CSV, Excel and Access Automation

Libraries
RhinoMocks, Castle Windsor, Spring-source, Log4net, Log4j, StructureMap, Autofac, Twitter Bootstrap, LESS, Sass/SCSS, Nunit

Reading
Design Patterns, Refactoring, Refactoring To Patterns, Test-Driven Development, Domain-Driven Design

Operating Systems And Desktop Environments
Windows Vista, XP, 2000, ME, NT, 98, 95 and 3.x, Windows Server 2003, Linux, Dos, Unix, SunOS, Apple, Macintosh, MacOS, Solaris 7, 8, 9, Gnome, and KDE

Experience

Web Developer
The Nerdery
Mar 2014 – Mar 2015

Worked on a marketing campaign website for a major clothing retailer using ASP.NET MVC. I built .NET web application features that used facial recognition to convert user images into similar emojis that they could use online. After the conversion process, the user was presented with a library of vector graphic images that they could use to customize their emoji using JavaScript and ajax.

Developed features for an Umbraco-based website for a major marketing company. I created from scratch a full range of Umbraco templates and objects integrated with ASP.NET MVC and AngularJS.

Helped build a large ASP.NET MVC website with the full range of modern social networking capabilities. During that project I built a Naive Bayes email classifier in order to sort through, and collect data on, user emails based on different criteria. I also created a full integration with Amazon's Mechanical Turk, so that administrators of the website could request business information through Mechanical Turk.


Software/Web Developer
Denim Group
Apr 2011 – Mar 2014

Designed and built a massive enterprise-level website for Medicare and Medicaid health care benefits distribution, covering a wide variety of Medicare and Medicaid programs. The application managed and stored data on clients, providers, and claims, and used several external interfaces. We used ASP.NET MVC, with Autofac for dependency injection and Entity Framework as the ORM.

Migrated the entire set of state cancer research data from an obsolete architecture to a new system provided by the CDC. At the same time I led a team to create a new enterprise-level hospital and provider management and tracking system using ASP.NET MVC and Fluent NHibernate. The project had previously failed three times before my team and I succeeded and exceeded client expectations.

Developed features for UFC.com, a large sporting/media website, and created several world-wide tracking poll systems as well as a variety of improvements and fixes across the site.


Software/Web Developer
Cobb Information Systems
Sept 2008 – June 2009

Created a power-saving inspection web interface using ASP.NET and SQL Server 2008, and a data importing process that utilized Microsoft Office automation.

Integrated global web services into an Aqualogic Service Bus using Java, Weblogic, AL Data Services Platform, and Eclipse.

Created the front-end and user management system of a Customer/Account lookup web site in .Net.


Software/Web Developer
BuildASign.com
Aug 2007 – Aug 2008

Created or modified almost every system of a large commercial website, front end and back end, from e-commerce to graphics processing to customer-facing web pages.

Designed and implemented an international version of the website, adapting every aspect of the site for international commerce. The international version of the site featured new product management administration as well as interchangeable measurement and currency systems, local postal rules, international shipping systems, and all the various subsystems dealing with international payment and credit card processing.

Implemented various complex customer-facing web pages using the latest technologies and controls. AJAX and other complex JavaScript controls were created or adapted from open source projects and integrated alongside advanced third-party commercial controls.


Software/Web Developer
The BHW Group
Apr 2006 - July 2007

Worked as part of a team to design and implement the most advanced real estate appraisal application on the market, using third-party Windows Forms controls and Microsoft SQL Server for data storage with an LLBLGen Pro data layer. Implemented a messaging system, a data importing/exporting system for a wide variety of databases and file types, and many other features and UI elements.

Almost completely rewrote and redesigned a broken document management system for a career fair website system.

Created a website that allows a distributor's website to connect to a supplier's inventory system using XML requests and responses with typed datasets. The system also included a database that was filled daily by a console application that loaded and parsed the supplier's Excel-based inventory files.


Software/Web Developer
TV Eyes Inc.
Dec 2004 - Apr 2006

Designed and implemented a series of applications to poll Internet feeds, download any form of audio and video, transcode that audio/video into streaming files, and index the audio and video into text for searching. I used C# to write much of this enterprise-level app, and I used some ActionScript and Flash to create an audio and video playback widget, which was a common solution at the time. Users could search for almost any word or phrase across a massive list of Internet audio and video and be shown the clip immediately.

Created a video emailing system that allowed a user to select video from an archive and have that video mailed to them, along with a transcript of their selection, after it had been automatically merged and clipped to their specifications.

Created two different applications for recording closed captioning from the US and teletext from the UK by interacting with tuner cards and inserting that text into a database. Built largely with C++ and DirectShow.


Software/Web Developer
Personal Business
June 2009 - Apr 2011

Built an enterprise-level online web game for ad revenue with my wife, who is also a developer. The application was built using domain-driven design and, in some cases, TDD, with Google Maps-style JavaScript/jQuery functionality.

During this project I learned and experimented with various design theories and architectural patterns. I also set up a continuous integration environment that ran automated unit and integration tests, as well as automated interface tests.
