I am very interested in starting to make some web tools for a project of mine in my free time.
I was wondering what the different options are for creating web tools and what your experiences are.
The ones I was thinking about were:
HTML5
Unity
Python
PHP + HTML + CSS + JavaScript
My goal is to be able to easily create prototypes of tools, send these to other people and get direct feedback through metrics.
As I have no experience with web tools, I was hoping for some input from you guys.
I don’t have much/any experience with developing web tools, but I did have a go at initially doing up my flowmap painter as a web tool instead of a standalone download (Unity). The main downside there is that the Unity web player is boxed away, so it doesn’t have any access to your local file system. Not sure if there’s a useful way of circumventing that, but I haven’t really spent much time investigating that route. I’m definitely interested in the web tools side of things, though. If you haven’t had a look at Insomniac’s site, they have some pretty interesting reading there (I believe they’ve moved quite a lot of their tools over to browser-based; pretty fancy, too, what with the engine renderer being stuck in there via plugin).
The sandboxing that limits conventional JavaScript and most plugins (like Unity) is a pretty significant hurdle for our kind of tools.
If you want to distribute, say, a database viewer or a tool for creating and submitting bug reports, it works pretty well; you can probably do most of what you’d want in JavaScript with some decent libraries. You could also look at PyJs or SharpKit so you could write the tools in Python or C# and ‘compile’ them to JS if you don’t like JS. I’m also interested in CoffeeScript as a Pythonesque JS alternative.
However, things are much tougher if you want to do something that actually does significant work on the local file system. You’ll have to get around the local security setup somehow – probably by having the user install a tool as an exe and then talking to it from the browser via some kind of RPC. That works, but it doesn’t really deliver on the ‘always there, always up to date’ promise of web-based tools.
A couple of options you can look at:
C# as the development platform, tools distributed as EXEs using ‘ClickOnce’ deployment. Pros: very simple deployment, auto-updating, full access to the local system. Cons: it’s really just a web-based installer for a conventional app, and it only works with IE out of the box – you need to install plugins for FF or Chrome. <Disclaimer> This is what I do these days </Disclaimer>
Running Jython in the browser using Java Web Start. Then write your tools in P(j)ython and your GUI with Java GUI tools like Swing. Pro: it’s almost Python, a good GUI library and lots of tools, everybody already has Java. Con: Web Start has a bad rep. Jython ~ python but != python.
Flash. I hear it’s ugly to work with, and I don’t know much about it, but I believe it can be done this way if the tool webpage and the .swf file are ‘Trusted’ by your browser.
RPC, as mentioned above (an example here). Your real ‘tool’ is actually a Windows service that runs headlessly on the local machine (you’d need to distribute it with an installer to set it up). Your UI – done in JavaScript, presumably – talks to the server, relaying commands and displaying results. The server could be Python, an exe, maybe Unity… Pro: does anything, looks like a web app. Con: you’re still on the hook for figuring out how to update the local server, and your code has to be completely decoupled (come to think of it, that’s probably a PRO).
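Since Python comes up repeatedly in this thread, here’s a minimal sketch of that last option using only Python 3’s standard library: a tiny local service that a browser-based UI could call over HTTP. The `/listdir` command, the port, and the localhost-only binding are all illustrative assumptions, not anyone’s actual setup.

```python
import json
import os
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.parse import urlparse, parse_qs

class ToolHandler(BaseHTTPRequestHandler):
    """Stands in for the headless local service; the browser UI calls it via HTTP."""

    def do_GET(self):
        url = urlparse(self.path)
        if url.path == "/listdir":
            # The sandboxed browser UI can't touch the file system itself,
            # so it asks this local service to do it and gets JSON back.
            path = parse_qs(url.query).get("path", ["."])[0]
            body = json.dumps({"files": sorted(os.listdir(path))}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, format, *args):
        pass  # keep the console quiet

def run(port=8765):
    # Bind to localhost only, so nothing outside the machine can reach it.
    ThreadingHTTPServer(("127.0.0.1", port), ToolHandler).serve_forever()
```

A real version would still need authentication and the auto-update story mentioned above – which is exactly the part that stays hard.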
We have an Ubuntu VM with Apache and MySQL where we run Python scripts via WSGI. Works very nicely and we don’t have to learn a new language. The VM allows every developer to grab a copy of the server so they don’t have to work on the live machine. On the production machine we run the VM as a service.
For programming we stick with: Python, JavaScript, HTML and CSS.
We use a custom solution for Ajax calls and a custom UI framework (2 modules) which draws HTML elements and forms and deals with submission via POST or GET, so developers don’t really have to deal much with that. Everything is simple and lightweight, and we’ve been using this for over a year now. For RPC we use JSON and HTTP, where we call Python WSGI scripts on Apache. Apache handles traffic, authentication (via Active Directory) and it scales nicely. We use this to run a couple of web-based tools here, e.g. the web interface to our QA tools which integrate with Maya and Max, a texture library viewer, a P4 browsing tool, and some more, all written in Python.
We plan to go even further and include some of those tools directly in our Python scripts with QWebKit. There you can intercept clicks on links and develop apps which run on the web and inside Maya and/or another script.
To tell you more about my goal:
I am researching/experimenting with a web tool aimed at making a pipeline more transparent – like a kid who empties his box of Lego so he can see exactly what his building blocks are.
My current idea is to reuse what we know about game maps and apply it to the problem of making project data visible.
Think about one big canvas that visually will change depending on the zoom or user-action.
The next goal would be to treat users like gamers: implement an avatar, show achievements to visualise progress and to teach the tool, …
So a platform where I can fail/learn quickly would be the thing I am looking for.
Out of personal interest I would like to use Python, as this would be a perfect opportunity to get to grips with that language.
PS: has anyone ever used the Google Docs API with their web tool?
Robert - Just want to make sure I understand: you’re distributing a VM with apache and your tools (as what - a disk image?) to all your users, and they run local copies of this.
So does your VM let people make changes to the local file system? Could you, for example, use this to open an xml file and edit it and then check it in?
No, the VM is just hosting the servers, which has some advantages over running on bare metal, i.e. we can easily make a copy of the VM to back up, distribute to another studio, or just tinker with. I do find your idea interesting, though, even if it may not be practical(?).
I see, so it’s a premade server configuration but not for local use. Re-reading your first post, it looks like the tools (p4 viewer, tex viewer, etc.) seem like things that could run server side instead of locally, yes?
This bit always drives me nuts - I understand the logic of making browsers unable to interact seriously with the local machine, but it makes the whole web-tools thing so awkward for applications other than database viewers
[QUOTE=Theodox;18870]I see, so it’s a premade server configuration but not for local use. Re-reading your first post, it looks like the tools (p4 viewer, tex viewer, etc.) seem like things that could run server side instead of locally, yes?
This bit always drives me nuts - I understand the logic of making browsers unable to interact seriously with the local machine, but it makes the whole web-tools thing so awkward for applications other than database viewers :([/QUOTE]
These tools do run server side.
But here is how to make them interact locally with a Python application:
E.g. your picture library is on http://somemachine.somewhere. Each picture has a link to download it. When you visit this page in a browser, all you can do is download pictures - boring! Now in a local app, e.g. a Maya script, you could use a QWebView and load that webpage. QWebView allows you to intercept clicks on links. So instead of downloading and saving the picture like in the browser, you could do something else with it: your Python script could assign it to, say, your selected shader as a diffuse texture, or whatever else you want to do.
Some other scripts we made don’t have such deep integration. They just share some Python code between the server-side Python CGI script and the client-side Python Maya script - for example, scripts run in Maya which gather statistics about an object. Instead of having to write the code to save and retrieve the data in two languages (e.g. PHP and Python), we can do it all in Python. This is mostly about code re-use, but also about not having to learn PHP or Perl or whatever.
There’s other stuff you can do with a webserver. E.g. you can do RPC calls from a local client-side Maya script to a server side CGI script and e.g. put your database code on the server and just exchange the results, e.g. JSON encoded. Advantage: you can change the DB code without having to update all the client side scripts. You don’t have to worry about installing any more 3rd party modules and re-compiling them for e.g. Maya 2013, because it all happens on the server anyway.
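A sketch of what that client side could look like in Python 3 (the thread predates it; under Python 2 you would use urllib2 instead). The server URL and the payload fields here are invented for illustration:

```python
import json
import urllib.request

def encode_payload(payload):
    """JSON-encode a dict for use as the request body."""
    return json.dumps(payload).encode("utf-8")

def call_remote(url, payload):
    """POST a JSON payload to a server-side script and decode the JSON reply.

    All the database code stays on the server; the Maya-side script only
    ever sees plain Python dicts, so the server can change freely.
    """
    req = urllib.request.Request(
        url,
        data=encode_payload(payload),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Hypothetical usage from inside Maya:
# call_remote("http://toolserver/stats.py", {"object": "pCube1", "tris": 1234})
```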
[QUOTE=Nysuatro;18875]Can you guys point me to “must know knowledge” for creating web applications?
In your replies I read terms like :
Web Server Gateway Interface
Remote procedure call
As this is quite new for me, I am trying to find out what the foundation of knowledge is for these kind of tools.[/QUOTE]
That’s a tough one. It depends a LOT on what you really need to achieve - as in, what exactly is “these kinds of tools”. For our production tracking system we are using Python with Django for the back-end and ExtJS for most of the web-based front-end. In addition there are tools talking to diverse interfaces of the back-end to provide local GUIs for various things like time-tracking. RPC (and/or REST interfaces) allow for “easy” integration of server-side back-ends with local UIs (e.g. integrated into DCC applications). WSGI is a Python-specific specification that defines a standard for communication between application frameworks and a server (usually a web server). Django is WSGI-compatible, which means it can be deployed directly within any server environment supporting WSGI.
For web-based tools there are so many different routes, languages, tools, technologies… so many buzzwords… so many different camps, some almost religious about their approach, that I find it next to impossible to point towards any one of them.
We chose python because that allows potential reuse of code as python is available in all of our DCC apps and is our scripting language of choice. This also means that RPC is dead simple usually.
We chose django, simply because it fitted our needs and is a very nice python based web framework.
We chose ExtJS because we have some very special needs for some of our planning views that needed really sophisticated widgets (we are also using ExtScheduler on top).
We chose to do parts of the system as local apps (Python, turned into “native” tools by packaging with py2exe or the like) because we needed that for specific functionality that is not possible in a browser.
So we first defined our needs (and possible future needs!) and then evaluated different possibilities to find the best set of tools and technology to match that.
I would start with trying to understand basic CGI scripts. A CGI script is like a command line program called by the web server (e.g. Apache) - you call it with some parameters, get some stuff (usually an HTML page) back, and then the program is finished. CGI is the mechanism by which the web server executes your Python/Perl/etc. program, and it is the most basic way to interact with a web-based program. Concepts you will encounter: CGI, POST and GET data submission, HTTP headers, MIME types. You want to look into Python’s built-in modules cgi and cgitb.
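For illustration, here is roughly what such a script looks like. A caveat: the cgi/cgitb modules mentioned above are deprecated in recent Python 3, so this sketch parses the query string by hand; the `name` parameter is an invented example.

```python
import os
from urllib.parse import parse_qs

def respond(query_string):
    """Build the full CGI response: headers, a blank line, then the body."""
    params = parse_qs(query_string)
    name = params.get("name", ["world"])[0]
    body = "<html><body><h1>Hello, %s!</h1></body></html>" % name
    # The Content-Type header and the blank line after it are mandatory in CGI.
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    # The web server sets QUERY_STRING in the environment and captures stdout.
    print(respond(os.environ.get("QUERY_STRING", "")), end="")
```

The whole lifecycle is: the web server launches the script, hands it the request via environment variables (and stdin for POST data), reads whatever it prints, and the process exits.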
RPC: E.g. you could call http://myserver/myscript.py?say=helloworld. And then you get a html webpage back saying “hello world”.
Now you don’t have to return a html web page. You can return any mime-data: html, JSON, binary. This is like calling a procedure - it’s got a name, parameters and you get something back. The fact that the procedure is a script on a webserver and not a function in your Python program makes it a remote function. myscript is the program, say is your procedure/function and helloworld is the parameter. The webpage you get back is the result of this remote call.
For RPC there’s a couple of ways to encode the data you pass to the function you call and how to get it back. There’s XML based methods, JSON based ones, etc. In our implementation at work we pass the data separately from the URL (unlike the example above) and encode it via JSON-RPC. We also encode the return values based on the JSON-RPC convention.
To use this in Python there’s a built in module called “json”. To send http requests and retrieve results use urllib and urllib2, which are also default Python modules.
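As a concrete sketch, building and decoding a JSON-RPC 2.0 envelope needs nothing but the built-in json module (the method name and values here are invented; sending the payload is then a single urllib request):

```python
import json

def make_jsonrpc_request(method, params, request_id=1):
    """Wrap a remote call in the JSON-RPC 2.0 request envelope."""
    return json.dumps({"jsonrpc": "2.0", "id": request_id,
                       "method": method, "params": params})

def parse_jsonrpc_response(raw):
    """Decode a JSON-RPC 2.0 response, raising if the server reported an error."""
    reply = json.loads(raw)
    if "error" in reply:
        raise RuntimeError(reply["error"])
    return reply["result"]
```

In the hello-world example above, `say` would be the method and `helloworld` a parameter; the body of the page you get back would be the encoded result.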
WSGI is another way your web server can execute your Python script. WSGI is like CGI, except the Python script doesn’t just start - execute - terminate like in CGI. WSGI keeps the Python script and the Python interpreter in memory and running (for a certain amount of time) after the request has been dealt with, instead of executing the Python script again for every call. This helps a lot with speed, but it’s not something I would look into right away if you’re new to all this. I recommend understanding how the classic and slow CGI way works first.
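To make the contrast concrete, here is a minimal WSGI application: a resident function the server calls once per request, rather than a process it launches per request. The `name` parameter mirrors the CGI sketch and is an invented example; wsgiref is the standard library’s test server.

```python
from urllib.parse import parse_qs

def application(environ, start_response):
    """Entry point the WSGI server calls for every request."""
    params = parse_qs(environ.get("QUERY_STRING", ""))
    name = params.get("name", ["world"])[0]
    body = ("<html><body>Hello, %s!</body></html>" % name).encode("utf-8")
    # Status and headers go through the callback; the body is returned.
    start_response("200 OK", [("Content-Type", "text/html"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Apache would run this via mod_wsgi; for local testing:
#   from wsgiref.simple_server import make_server
#   make_server("127.0.0.1", 8000, application).serve_forever()
```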
Once you know the foundations, you can start thinking about frameworks that fit your specific needs - it really depends on what you want to do. Just use RPC? Or use Python to generate HTML-based applications which run in a browser? What sort of traffic do you expect, bandwidth-wise and concurrent-access-wise? How interactive will your HTML-based applications be, and does it make sense to look into Ajax and things like jQuery?
[QUOTE=RobertKist;18878]I would start with trying to understand basic CGI scripts. [/QUOTE]
+1
Also, once you do web tools you will find out how much of a pain in the ass it is to develop them compared to standalone tools, and you will run from web tools. lol.