Start using virtual environments! (instead of mayapy)

Virtual environments are well-established in Python, and have been for quite some time; however, after chatting with a couple of TD acquaintances, and even seasoned programmers in the game industry, I’m still running into people who either have never heard of them(!), don’t use them because “they’re too troublesome to set up”, or do use them, but in esoteric ways that don’t let them leverage the advantages of working with them.

This is something I’ve been meaning to do a detailed write-up on for a while now, but I kept putting it off because I assumed that most Python developers, like me, would already have learned this painful lesson through experience. So, if you don’t use virtual environments, or have no idea what they are, hopefully you’ll find the information in this post to be of some use!

As I did in my post about Inno Setup and deployments, I’m going to write this post in the form of a problem/solution format: I find I ramble less that way, and it’s easier to read as well (in my opinion).

So, what are virtual environments anyway?

If you didn’t read the first link, a Python virtual environment is essentially a standalone Python interpreter that lives on its own, somewhere on the filesystem. If you use Maya, for example, you may be familiar with mayapy, which is Maya’s standalone Python interpreter that can be used for running Python scripts without having to run Maya itself. For Softimage XSI peeps, XSI’s bundled Python interpreter, in case you didn’t already know, is also located in your installation directory, under /Applications/python, and for 3ds max users, it’s available at the same relative location as well.

A virtual environment is essentially the same thing, except that of course, it’s an actual standard Python interpreter without the additional packages or customized libraries that some of these bundled Python interpreters may have set up specifically for use with their host application.
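You can see this for yourself: every Python interpreter, whether it’s mayapy, a virtual environment, or the system install, reports where it lives and which environment it belongs to. A quick sketch (the paths in the comments are illustrative, not from any particular machine):

```python
import sys

# Which interpreter is actually running this code, and which
# environment does it belong to?
print(sys.executable)  # e.g. C:\projects\mytool\venv\Scripts\python.exe
print(sys.prefix)      # e.g. C:\projects\mytool\venv

# Classic virtualenv sets sys.real_prefix to the original interpreter's
# location; newer tooling instead leaves base_prefix pointing there.
in_venv = hasattr(sys, "real_prefix") or sys.prefix != getattr(
    sys, "base_prefix", sys.prefix
)
print("Inside a virtual environment:", in_venv)
```

Run it from a plain system Python and from inside a virtual environment, and you’ll see the prefix change accordingly.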

Why should I care about them?

Let’s paint a little scenario here: you’re working on a 3ds max tool that someone else wrote to, let’s say, parse some XML files that act as metadata of some sort and return some useful values that the tool can then use to build a rig. For some reason, when your co-worker tests the tool on his workstation, everything runs beautifully, yet when you try to run it on yours, you get ImportErrors up the wazoo. Upon further investigation, you realize that the entire tool was written using the lxml libraries, which are not part of the Python Standard Library. Your co-worker had installed the package into his own 3ds max’s Python installation site-packages directory, and completely forgotten about it as he was developing his tool.

No biggie, you think, you’ll just install that package on your own installation of 3ds max and everything will be good.
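As an aside, you can catch this failure mode early with a small diagnostic (a sketch, not from any particular tool) that you run in whichever interpreter the tool will actually use — mayapy, 3ds max’s Python, or a virtual environment — to see which dependencies are actually importable there:

```python
import importlib

def missing_dependencies(modules):
    """Return the subset of module names that can't be imported here.

    The module list is whatever your tool depends on; run this inside
    the target interpreter to spot missing packages up front.
    """
    missing = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

# In the scenario above, checking for lxml would have flagged the
# problem before any ImportErrors appeared at runtime:
print(missing_dependencies(["lxml", "xml.etree.ElementTree"]))
```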

Now think: what if instead of just one package, you’ve got a dozen of them? And some of them rely on specific versions? You think I’m kidding, but to bring in a real-world example: when I was developing Amazon S3 integration for ftrack with 3ds max, there was, for about a year, a showstopper issue with using any version of the boto library past 2.28.0, and it was only recently fixed. And not to rag on ftrack or anything, because this sort of version-specific dependency issue will happen all the time once your project becomes big and complicated enough to start requiring packages from all over the place.

This problem gets worse when you’re not just developing tools for use in host applications, but standalone Python applications as well. What if you already have some packages installed on your computer, but not others? What if the project requires that you install a specific version of a package that conflicts with the one you have installed on your system? And furthermore, think about setting up your development environment in the first place: are you really going to sit there and waste an hour checking against your co-worker’s pip list result in order to figure out which packages you already have and which ones you don’t? And don’t forget: you’ll have to repeat this process each and every time you want to develop/deploy this project on a different workstation, whether at home or at work or on your end-user’s machines.

Virtualenvs solve (nearly) all these problems, and rather elegantly.


Instead of writing, I’m now going to go straight into an example: let’s say that I want to work on a completely new Python project; how would I go about setting up a development environment for it?

Well, first, we’ll need a couple of things. Obviously, we’re going to need virtualenv itself. Fortunately, installation via pip makes this fairly painless:

pip install virtualenv

I also install another package called virtualenvwrapper because it helps make various operations easier. We’ll see why later:

pip install virtualenvwrapper

Once you have these two packages installed, you’re ready to begin creating your own virtual environments!

Making my virtual environment

  1. Go to your project’s root directory (or wherever the top-level Python module entry point is) in a command prompt and create a new virtual environment with the following command:
    virtualenv venv

    You can use any name for the virtual environment that you prefer, I just prefer to use “venv” because it makes sense.

  2. You should see a bunch of things happening, but once the command has finished running, you should be left with three folders in the /venv directory: /Include, /Lib and /Scripts.
  3. You’re done! But now comes the real grunt work: making your virtual environment recognizable by the IDE you’re using for development. For brevity’s sake, I’m only going to cover detailed setup instructions for one popular Python IDE, namely PyCharm. I’ll also talk a little about Python Tools for Visual Studio and Sublime Text, since those two are widely used as well, but I personally prefer Wing or PyCharm when it comes to Python tools development. (Unless I’m working on mixed-code projects or something)

Using a virtual environment in PyCharm

  1. Go to the Settings page in PyCharm. (File > Settings)
  2. Go to Project > Project interpreter. In the combo box that shows the current project interpreter being used, click the gear icon at the right side, and choose “Add Local”.
  3. Instead of navigating to your Maya Python executable like you might normally do, this time, navigate to your newly-created virtual environment’s Python executable (/venv/Scripts/python.exe) instead.
  4. Add it, set it as your project interpreter, and apply the settings.
  5. Panic because all your modules have suddenly lost their code intelligence and red squiggly lines are appearing everywhere. Fret not, because we’re going to add the paths to the Maya completions next, just not through PyCharm’s GUI itself. With virtualenv, we have a better solution: site-specific configuration hook files. (This is also why we installed virtualenvwrapper, which gives us access to some convenience commands that do a bunch of boring asinine stuff for us automatically)
  6. In the command prompt, navigate to the /Scripts directory of your /venv directory. Now run the activation script:

    activate

    and you should see your command prompt gain a (venv) prefix in front of the current working directory.

  7. Now run the following command to add any project-specific paths that you need (such as the Maya completion helpers):
    add2virtualenv "C:\Program Files\Autodesk\Maya2016\devkit\other\pymel\extras\completion\py"

    While we’re at it, why not add the PyCharm debug helper egg files as well for remotely debugging Maya?

    add2virtualenv "C:\Program Files (x86)\JetBrains\PyCharm 4.5\helpers"
    add2virtualenv "C:\Program Files (x86)\JetBrains\PyCharm 4.5\debug-eggs"

    Restart PyCharm, and voila! Your project should have code intelligence working for it again and PyCharm should be able to resolve the symbols being used in Maya-specific commands.
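Incidentally, add2virtualenv isn’t magic: roughly speaking, it appends your paths to a .pth file in the venv’s site-packages, which Python’s site module reads at startup and adds to sys.path. If you’d rather not install virtualenvwrapper, a hand-rolled sketch of the same idea looks like this (the file name follows virtualenvwrapper’s convention; the helper name and example call are mine, not part of any library):

```python
import os

def add2virtualenv_by_hand(site_packages, paths):
    """Append paths to a .pth file inside the venv's site-packages.

    Python's site module reads every .pth file in site-packages at
    interpreter startup and appends each line to sys.path.
    """
    pth_file = os.path.join(site_packages, "_virtualenv_path_extensions.pth")
    with open(pth_file, "a") as f:
        for path in paths:
            f.write(path + "\n")

# Usage (Windows layout, matching the venv created earlier):
# add2virtualenv_by_hand(
#     r"venv\Lib\site-packages",
#     [r"C:\Program Files\Autodesk\Maya2016\devkit\other\pymel\extras\completion\py"],
# )
```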

Using a virtual environment in Sublime Text

Before this, you should have already set up your project as a Sublime Project. If not, you should probably read up on how to do so before continuing any further.

After you have your project configured appropriately, all you have to do is add the following entries in your sublime-project file:

    {
        "folders":
        [
            {
                "path": "."
            },
            {
                "path": "C:\\Program Files\\Autodesk\\Maya2016\\devkit\\other\\pymel\\extras\\completion\\py"
            }
        ],
        "settings":
        {
            "python_interpreter_path": "$project_path/venv/Scripts/python.exe"
        }
    }

Unfortunately, as far as I can tell, Sublime Text does not read the site-specific packages from the virtual environment unless you install additional plugins to help with Python code intelligence (such as Jedi), so you still have to list the paths you want auto-completion for explicitly in the project file.

Using a virtual environment in Visual Studio with PTVS

Note: I’ve been having issues with virtual environments and PTVS in Visual Studio 2015. For that reason, if you’re planning to do Python development in Visual Studio, I would recommend sticking to the 2013 builds, which work fine as far as my testing goes.

If you’re using PTVS, you can add an existing virtual environment to your project by right-clicking Python Environments and selecting Add Existing Virtual Environment.
Navigate to your virtual environment’s root directory (not the /Scripts folder) and you should be able to add it to your project automatically.

Unfortunately, however, that’s not the end of the story. Due to the way that PTVS works, in order for Intellisense to work as expected, you may need to manually increase the number of modules that PTVS is allowed to analyze in the background by setting the following registry key:


to something higher. I use 10000; be aware that increasing this value will impact how much memory PTVS will use when inspecting your code.

Because PTVS doesn’t support site-configuration files for virtual environments, you’ll also need to add the paths manually for code completion to function as intended. (You can see why I prefer to use PyCharm for my work)

Synchronizing packages across virtual environments with requirements files

So now we’ve got our virtual environment set up with our IDE of choice, and we’re ready to begin coding. But are we? What about all those packages I was talking about that we might install like lxml?

Well, that’s where requirements files come into play.

Now, before we proceed; there are plenty of ways to do this. You could have a configuration management tool such as Chef handle the setup of the environments for you. You could use a setuptools-based packaging method with a script, complete with manifest file, to handle setting up the environment across different workstations. Hell, you could even write batch/shell scripts to handle installation of these packages for you as well. There are probably dozens of other ways that I don’t know about that could possibly work more elegantly than using simple, text-based requirements files.

However, I’m going to talk about using requirements files here, because:

  1. They’re plaintext, and easy to handle with source control.
  2. They’re plaintext, and they’re simple to read and understand.
  3. They’re plaintext, and they work great with pip without requiring any other sort of dependencies.
  4. They’re plaintext, and simple.

So what is a requirements file? It’s basically a text file that tells pip which packages to install, and at which versions, for your virtual environment.

For example, a requirements.txt file is just a list of package names pinned to versions, one per line, something like:

    lxml==3.4.4
    boto==2.28.0
    requests==2.7.0
If I wanted to sync these packages with my virtual environment, all I would need to do is make sure that my source control ignores the /venv folder, check the requirements.txt file into source control somewhere, and, after activating my virtual environment as detailed above, run the following command to install those packages, at those exact versions, into that virtual environment only (to generate such a file from an existing environment, run pip freeze > requirements.txt while it’s active):

pip install -r requirements.txt

You can even specify that a package should be at a specific version or higher, like so:

    lxml>=3.4.0
Or constrained to a range of versions:

    lxml>=3.2,<3.5
You can take a look at the full list of package specifiers for the syntax required.

See the benefits now? Not only can I ensure that my virtual environment is exactly the same across workstations, I can isolate projects from each other with completely different configurations, so that I know they are actually functioning as intended, and not just because I happen to have a very specific package installed globally in my system’s Python installation.

What’s more, you can even take this further with some simple shell/batch scripts to automatically generate a development environment, add your required helper paths, activate your virtual environment and use pip to install your other required packages from PyPI; you can then check this into source control for other developers to use.
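For instance, a minimal bootstrap script along those lines might look like the sketch below (mine, not this post’s actual script; it assumes virtualenv is installed for the interpreter you run it with, and that requirements.txt sits in the project directory):

```python
import os
import subprocess
import sys

def bootstrap(project_dir=".", dry_run=False):
    """Recreate a project's development environment from requirements.txt.

    With dry_run=True, the commands are returned instead of executed,
    which also makes the logic easy to test without touching the disk.
    """
    venv_dir = os.path.join(project_dir, "venv")
    # Windows virtualenvs keep executables in /Scripts; POSIX uses /bin.
    bin_dir = "Scripts" if os.name == "nt" else "bin"
    commands = [
        # Create the isolated environment...
        [sys.executable, "-m", "virtualenv", venv_dir],
        # ...then install the pinned packages into it, and only it.
        [os.path.join(venv_dir, bin_dir, "pip"), "install", "-r",
         os.path.join(project_dir, "requirements.txt")],
    ]
    if dry_run:
        return commands
    for cmd in commands:
        subprocess.check_call(cmd)

# bootstrap(".")  # run for real from your project's root directory
```

Check a script like this into source control next to requirements.txt, and a new workstation is one command away from a working environment.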

There’s a ton more that I could talk about regarding virtual environments and the other applications they have for developers, but frankly, that is a whole other post in and of itself, and I think I’ve covered what I wanted to here, which is to introduce the concept of reproducible development environments to people who may not be aware that fairly robust tools for doing so already exist.

Hope this helps! And if you’re still installing development packages directly into your host application’s Python interpreter, please stop; use virtualenvs and site-configuration files instead!

3 comments on “Start using virtual environments! (instead of mayapy)”

    1. Hi Mitchell:

      The point is not to get Maya to recognize your virtual environment (though you certainly can replace the Python in Maya with your own virtual environment and Python interpreter); the point being made is that instead of editing using the raw environment that you’ll be working with, you use a virtual environment to emulate the target environment, whether it be Maya, a standalone application, or something else entirely. This way you avoid burdening your target environment with unnecessary dependencies, and you are free to do controlled testing within your virtual environment while knowing that your target environment will remain immutable.

      1. I see, but when it comes to testing Maya specific code that relies on the likes of maya.cmds or pymel, how does this work inside a virtual environment? Wouldn’t you still need to run it inside Maya or mayapy?
