Lastly, I’ve also confirmed the test suite is running under Python 3.2+. This is my first foray into the brave new world of 3.x, so please open tickets for any issues or suggestions.
I finally read Dune. It was more of a fantasy story than pure sci-fi. The picture Dune paints reminded me of something you’d see in Heavy Metal or some other comic, so it was a pretty fun read.
Then I made the mistake of watching the movie.
First off, the positive. The movie really tries to fit as much of the book as possible. It uses a narrator to fill in a lot of gaps and includes the internal dialog prevalent in the book. It ends up being pretty cheesy though and reminded me of The Wonder Years.
Now, obviously the book is better than the movie. What is funny is where the movie took liberties. Generally, the “powers” of the different characters feel more magical than in the book. This isn’t a huge deal, but it cheeses the movie out.
The worst and most ridiculous is the milking of the cat.
In the book there is a character who is captured by the antagonist camp. He is given a poison that requires a daily dosage in order to keep the mortal effect at bay. In the book they give it to him in his food.
What do you think they did in the movie?
That’s right. They brought in a totally stupid contraption built around an annoyed white cat that had a rat on its back and told this character that he had to milk the cat every day in order to keep the poison at bay. I have no idea...
The worst part of it all was that the movie made the book feel cheesy. It was so bad that the imagery and story the book painted started to feel like a cheesy 80s B sci-fi flick. It was kind of a bummer.
I read an article about a new retinal HMD. Virtual reality has never been a huge interest of mine, but seeing as I look at a screen all day as a programmer, anything that could improve what I see all day seems worth a look (no pun intended).
It occurred to me that the development of this sort of technology is really similar to a modem. If we think back to the early days of the internet (it wasn’t really that long ago in the grand scheme of things) we had modems in our computers. A modem is a “MODulator DEmodulator” and it took analog sound from the phone line and translated it into data for your computer.
Our eyes act like a modem. We sense changes in light and translate that to data. We then act on that data accordingly. Sometimes that data causes us to blink while other times it stirs up emotions. This last bit is why I believe most of these technologies focus on films as an example use case. A movie is really a physical representation of experiences as told through light. If a movie causes the viewer to experience some emotions, it has effectively communicated its message.
I’m still on the fence as to whether this sort of direct analogue connection to our brains is beneficial or just plain old scary.
I’ve released CacheControl 0.7.1. This release includes patching of the requests Response object to make it pickleable. This allows you to easily implement cache stores on anything that can store pickled text. I’ve also added Redis and filesystem based caches.
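If you want to try one of the new stores, the wiring looks roughly like this. It is a minimal sketch; the import path and the FileCache constructor argument are my best guesses at the caches module layout, so check the docs if it doesn’t line up.

import requests

from cachecontrol import CacheControl
from cachecontrol.caches import FileCache

# Wrap a plain requests session; responses get pickled into the cache store.
sess = CacheControl(requests.Session(), cache=FileCache('.webcache'))
resp = sess.get('http://example.com/')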
I also added docs!
Please give it a go and file bugs!
Virtualenvwrapper is a really helpful tool that allows you to keep your Python virtualenvs organized in a single location. It provides some hooks to make working with a virtualenv in a shell simple. Unfortunately, it is not well suited to organizing automated virtualenvs used in a project’s build tasks.
First off, I should say that my goal with any build task is that it can be run without any external requirements. No environment variables should need setting. No virtualenv needs activating. No other services need to be up and running (within reason). My goal with any project is to support something like this.
$ git clone $project
$ cd $project
$ make bootstrap
$ make test
$ make run
$ make release
The problem with virtualenvwrapper is that it assumes you are using it from a shell. It implements its functionality as shell functions. It has to do this because it is impossible for a child process to adjust the environment of the parent in a way that lasts after the child process ends. Virtualenvwrapper’s user interface wants to activate a virtualenv after it has been created, so shell functions are the best way to do this.
None of this means that a developer cannot use virtualenvwrapper. It simply means that using virtualenvwrapper to create and bootstrap your environment is more complex and could be more brittle over time. It is safer and more reliable to just create the virtualenv yourself, while making it configurable to utilize a virtualenv previously created by virtualenvwrapper.
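As an illustration, a make bootstrap target can simply call a small script along these lines. This is a sketch under my own assumptions: the VENV environment variable, the .venv default and the requirements.txt file are placeholders for whatever your project actually uses.

# bootstrap.py: create the project's own virtualenv unless the caller points
# us at an existing one (for example, one made with virtualenvwrapper).
import os
import subprocess


def bootstrap():
    # VENV is a hypothetical override; default to a .venv inside the project.
    venv = os.environ.get('VENV', os.path.join(os.getcwd(), '.venv'))
    if not os.path.exists(os.path.join(venv, 'bin', 'python')):
        subprocess.check_call(['virtualenv', venv])
    pip = os.path.join(venv, 'bin', 'pip')
    subprocess.check_call([pip, 'install', '-r', 'requirements.txt'])


if __name__ == '__main__':
    bootstrap()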
There have been a ton of discussions regarding privacy recently in light of the Snowden NSA revelations. Many discussions revolve around encryption, using services outside the US and generally how to make it difficult for a snooper to read information passed around the internet. I’m definitely in favor of tools that enable keeping information private.
At the same time, wouldn’t it be better if our understanding of the internet and technology changed such that the users could be considered the owners? It seems as though legal tools like copyright, professional privilege and protection from unlawful search and seizure should extend to our lives online. Privacy as an ideal should not be limited by the current technology of the day. Privacy is a concept that should permeate our laws, no matter the current state of technology.
As an aside, I wonder if the RIAA should consider suing the NSA for copyright infringement. Imagine the number of songs and copyrighted works flowing through email that the NSA might have “copied” digitally. Anyway...
I’m sure the government is never going to offer its constituency true privacy. To put it generally, it makes life harder for law enforcement. If you can’t see what people are doing, then how can you punish (and most importantly tax) them? I have a hunch that there are still plenty of required mediums that make auditing and discovering wrongdoing possible. A warrant is a piece of paper that describes an exception to privacy. That seems like a pretty reasonable way to go about finding evidence. After all, we are innocent until proven guilty.
On the other side of the coin, I believe society could greatly benefit from a culture of privacy. That little black box the insurance companies want to put in your car would be more appealing if you owned the data it recorded and could feel safe knowing the government can’t simply ask the insurance companies for the data without proper cause. Smart phones could be tracking your every move for your own usage, not the government’s. Technology can create new ways of recording and using data without having to be concerned that users’ privacy could be compromised. There is a world of automation that becomes available when you don’t have to worry about the data becoming public knowledge.
I don’t imagine any of this will happen. Most likely our government will continue to make hidden strides toward destroying privacy in order to maintain power. Technology will try to curb this threat to privacy and users will become increasingly accepting of big brother watching everything we do. I only hope that some in our government will realize the danger of stealing privacy and make a stand to keep it safe both now and in the future.
At work we use two frameworks, Django and CherryPy. The decision to use one or the other typically comes down to who is starting the project and, to a lesser extent, whether the app is primarily a user facing app or an API. For example, if we need to put together an app to show off some data publicly, Django is our go-to framework. If we are creating an internal REST API for other services, CherryPy is typically the way to go.
Developers typically feel more comfortable with one framework. I’m definitely a CherryPy guy, while the rest of the folks on my team fall on the Django side of the fence. The result is that I’m often working on Django code, which ends up being pretty frustrating.
First off, the nice thing about Django is that if you commit to the ecosystem and learn it, there is a wealth of 80% tools you can use to create a functional web app. This is true of any opinionated full stack framework and I’d consider Django a prime example. When you understand Django, you can get a lot of stuff done.
The problem is that when you don’t know Django, getting things done is a challenge. The reason is that the framework hides general Python techniques in order to hide complexity. As I said, when you understand what happens under the hood, hiding the complexity is fine. The problem is that many full stack frameworks, such as Django, don’t make it easy to look under the hood and follow the stack to the necessary code.
CherryPy, on the other hand, makes uncovering the layers of complexity much easier. You can typically isolate bits of the framework relatively easily and test them in a prompt or simple script to discover issues. The source code is also small enough that diving into its algorithms is not unreasonable. Sure, the documentation is lacking, there are fewer high quality plugins and you will probably have to make more decisions as to how to implement common idioms, but the result is that uncovering the logic is rarely a problem.
Personally, I like CherryPy because you can take the codebase and figure out what is going on. When you do hit libraries such as SQLAlchemy or template engines such as Mako or Jinja2, the documentation is typically of a high quality because of the smaller set of topics that need covering. Also, while it is possible to create CherryPy specific integration points, it is just as easy to write your own classes and functions to hide complexity as the need arises.
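To make that concrete, here is a tiny sketch of the kind of thing I mean. The resource itself is made up, but the point is that a CherryPy handler is just a method on a plain class, so you can call it from a prompt or a test without the framework running.

import cherrypy


class Children(object):

    @cherrypy.expose
    @cherrypy.tools.json_out()
    def index(self, parent_id):
        # a real handler would look the children up; hard coded here
        return {'parent': parent_id, 'children': []}

# From a prompt or a test, no server required:
#   >>> Children().index('42')
#   {'parent': '42', 'children': []}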
It can be frustrating working on Django because it is difficult to peel back the layers. For example, we use Tastypie for some API endpoints. It is exceptionally nice for exposing models. You get pagination, multiple authentication schemes, and a whole host of other bits that are nice. That said, when you need to adjust the API, it is cumbersome and produces somewhat ugly code. Here is an example, from the docs.
# Imports as I would expect them for this example.
from django.conf.urls import url
from django.core.exceptions import ObjectDoesNotExist, MultipleObjectsReturned

from tastypie import fields
from tastypie.http import HttpGone, HttpMultipleChoices
from tastypie.resources import ModelResource
from tastypie.utils import trailing_slash


class ParentResource(ModelResource):
    children = fields.ToManyField(ChildResource, 'children')

    def prepend_urls(self):
        # add a nested /children endpoint to the generated URL patterns
        return [
            url(r"^(?P<resource_name>%s)/(?P<pk>\w[\w/-]*)/children%s$" % (self._meta.resource_name, trailing_slash()),
                self.wrap_view('get_children'), name="api_get_children"),
        ]

    def get_children(self, request, **kwargs):
        try:
            obj = self.cached_obj_get(request=request, **self.remove_api_resource_names(kwargs))
        except ObjectDoesNotExist:
            return HttpGone()
        except MultipleObjectsReturned:
            return HttpMultipleChoices("More than one resource is found at this URI.")

        child_resource = ChildResource()
        return child_resource.get_detail(request, parent_id=obj.pk)
First off, you have to understand a suite of concepts. Tastypie generates URL regexes for you. You can override these via the prepend_urls method. Second, the get_children method catches exceptions that come from Django core in order to return Tastypie specific error responses. Finally, the get_detail method is a helper that will automatically render the object found in the get_children method and return a proper Tastypie response.
As you begin to understand the code it is not a huge mystery what is happening. With that said, there is a lot of reading that has to happen before you can begin to understand what is really going on. You also have to understand the implicit boundaries between Tastypie and Django. Finally, all of this hangs on a semi-magic set of Resource objects that inject themselves into the list of URL patterns, removing the benefit of having all your URLs in one place.
Hopefully it is clear how trying to understand and debug this type of code is challenging and can be frustrating. While it hides a great deal of complexity for you and adds many features that you may or may not need, it presents a chasm between the code and its actual effect that must be crossed by reading documentation.
At this point I should mention that this kind of code is a pet peeve of mine because it is difficult to maintain. Someone approaching this code without a strong background in Django and Tastypie would have to spend a good amount of time getting up to speed before being able to try and fix a bug. What’s more, that person would not be able to simply open up a Python prompt or write a test without further reading about what specialized tools are available and how to use them. Obviously, it is not a waste of time to make the investment, but personally, I’d rather learn by writing code, isolating functionality and writing tests than by reading docs and hoping they are up to date.
I decided to take the plunge and buy a keyboard with mechanical switches. What put me over the edge was the claim that it improves typing accuracy because your hands get used to the sound and feel of the actual switch.
After using it for a week or so, I can’t say that I’m in love just yet. The sound of the keyboard really is loud. I’ve found I have to pay attention to how I’m typing as well. I’ve heard that once you get used to it you don’t really press all the way down because you can feel where the switch engages. Whether or not this is entirely true, using a light touch seems to help avoid mistyping. The most common error when I’m trying to type quickly is when the wrong letter comes first. It is almost like a real typewriter in that it feels like I need to type slowly and deliberately in order to make sure I get it right. Another common frustration is repeated keystrokes. Often when I have to delete more than one letter or word, I’ll press the delete key without pressing it fully to where it engages. The result is that I start typing again and have to start over because the spacing is incorrect.
I will say that I’ve never been a very strong typist. While I can type quickly at times, my error rate is pretty high. My hope is that this new keyboard will help improve my accuracy and so far I think it might be working. At least as far as typing on this keyboard is concerned. When I move to my laptop keyboard it feels rather foreign and takes a bit of getting used to. That also might be due to having a new X1 Carbon rather than my MacBook Pro. The X1 has a pretty good keyboard, but I wouldn’t consider it any better than my Mac keyboard.
I do hope that this change is helpful. The keyboard does feel really rugged and having something new to type on does add a little spice to writing code. At this point I can’t say I’d recommend mechanical switches, but I can definitely see how over time someone could really fall in love with the feel and sound.
For the past few years I’ve been more or less happily developing on OS X. Thanks to Emacs, I have a nice text-based interface to work with that allows me to manage most of my core applications (text editor, IRC, email, terminal, etc.) in a keyboard-centered environment. At the same time, I missed the stripped down environment of my tiling window manager of choice, StumpWM. A recent associate moved on to “googlier” pastures and left behind an X1 Carbon that was up for grabs. Seeing as my MacBook Pro always had problems running VMs and the disk was always almost full, it seemed like a good time to switch.
When switching environments, it is usually a time when you are forced to reinvestigate the tools currently available. Here are some tools that I’ve found interesting.
Helm is the reinvention of anything.el. Many people compare it to Spotlight, Alfred and Quicksilver on the Mac in that it helps you configure smart lookups to find things. People use it for everything from autocomplete to a nice interface to Spotify. The Spotify video inspired me to write some code to browse the files in my recently converted blog. It was really easy to do and puts a pretty face alongside a usable UI for very little effort.
Anyone who follows Emacsrocks has probably already seen Expand Region. It is a really simple package that helps you semantically expand what is selected. Here is a short video showing how it works. The nice thing is that if you are refactoring code, this makes it easy to select the current expression, function or class and cut/copy it where you need it to go. Likewise, if you need to search/replace in a semantic block, it is trivial to do without having to move around to make the selection.
Toolz extends the itertools, functools and operator modules in order to provide a more robust set of functional programming patterns in Python. After playing with it a bit, it was clear how helpful a tool it can be in a distributed processing model. It is trivial to construct a complex pipeline of transforms and pass it to a multiprocessing pool to quickly crank through some data.
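Here is a minimal sketch of that idea. The transforms themselves are made up; the pattern is the interesting part: build one callable with toolz and hand it to a pool.

from multiprocessing import Pool

from toolz import compose


def parse(line):
    # split a CSV-ish line into fields
    return line.strip().split(',')


def to_record(fields):
    # turn the fields into something easier to work with
    return {'name': fields[0], 'value': int(fields[1])}


# compose applies right to left: parse first, then to_record
transform = compose(to_record, parse)

if __name__ == '__main__':
    lines = ['a,1\n', 'b,2\n', 'c,3\n']
    pool = Pool(4)
    records = pool.map(transform, lines)
    pool.close()
    print(records)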
There are tons of tutorials and libraries out there for creating a proper Unix daemon. PEP 3143 proposed a module for the standard library, since daemonizing is something that hasn’t changed in a long time. The result was python-daemon. The python-daemon module is really easy to use and makes helpful bits like changing the working directory and capturing stdout/stderr trivial.
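A minimal sketch of how that looks. The DaemonContext keyword arguments are the ones described in PEP 3143; the paths and the run function are placeholders.

import time

import daemon


def run():
    # stand-in for the real long running work
    while True:
        time.sleep(60)

with daemon.DaemonContext(
        working_directory='/var/lib/myapp',
        stdout=open('/var/log/myapp/out.log', 'w+'),
        stderr=open('/var/log/myapp/err.log', 'w+')):
    run()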
Invoke is a Python build tool that is similar to Paver. What is interesting about it is that it has a mechanism for including other source files as extensions. It has a focus on calling multiple tasks at the same time and handling each task’s arguments correctly. I haven’t had a chance to mess with it very much, but my cursory overview has been positive. It cleans up a couple of annoyances I had with Paver regarding task arguments. It also comes from the folks that wrote Fabric.
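For reference, a tasks.py might look roughly like this. I haven’t used Invoke in anger yet, so treat this as a sketch: the task bodies are made up and the exact decorator signature may differ between versions.

from invoke import task


@task
def bootstrap(c):
    # build the project's virtualenv and install dependencies
    c.run('virtualenv .venv')
    c.run('.venv/bin/pip install -r requirements.txt')


@task
def test(c):
    c.run('.venv/bin/py.test tests')

# Multiple tasks can be run in one go: invoke bootstrap test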
This process of setting up my dev environment has been fun. It has been much simpler to get my Emacs up and running thanks to keeping my config files in source control and my package listings up to date. My fingers remembered how to use StumpWM. It is as though I never switched! Hope you enjoy my recent finds!
I’ve had a VPS host since 2007 in order to run Python web apps. The reasoning is that most shared hosts, in addition to killing long running processes, rarely made it easy to create your own environment. I’ll remind you, this was in the early stages of virtualenv and there were still hairy tutorials on how to get a Python WSGI process running on Dreamhost.
Since then the landscape has changed quite a bit. There are far more hosts that support long running processes. There are services such as Heroku that make deployment of Python apps a cinch. VPS hosting has also become more common and easy to get up and running.
Beyond the technical differences, the biggest reason I switched from VPS Link to Digital Ocean was the price. As anyone who has used a VPS knows, RAM is a fleeting resource. With only 256MB, running any LAMP stack is pushing the limits, never mind being able to use something like MongoDB or some other more interesting NoSQL store. I’m now spending $10 a month for a gig of memory where I was spending $25 a month for 256MB. It was a no-brainer.
The other change is that I’ve switched from Pelican to Tinkerer for my blog. I’m not positive I’ll stick with it since sometimes it is nice to have the WordPress infrastructure in place. Now that I can actually run a database, I wouldn’t mind switching back. For the time being though, I’m going to check out Tinkerer and write some elisp to make using it easy in Emacs.