My new role in my company has meant that I no longer edit daily. Instead, I assess new technology, troubleshoot problems, handle training for new users, and test existing workflows for issues and propose improvements or solutions.
It’s great that my primary role is now what I used to do in my free time at work. My obsession with saving time by refining workflows, organization and processes was, in the first place, about buying myself more time to put better shows together, but also about keeping up to date with new technology and toys in media production.
What I can say is that the numerous moving parts and challenging requirements of my company truly keep me in a constant position of having to learn more every day about systems and platforms I previously knew little or nothing about.
Obviously, because I’m in a position where I’m expected to know most of these things and make recommendations based on knowledge and experience, this can occasionally be challenging. But what’s becoming clear is that technology, consumer expectations and Moore’s Law are conspiring to create a perfect storm where the knowledge you attained upon graduating school may be obsolete by the time you land a job. The platform you learn may be EOL (End of Life) by the time you truly master it, FCP 7 being the most current example. The content you thought was exactly what your viewers wanted proves to be exactly that: what they WANTED then, but not any more.
So in this landscape, no one is an omniscient expert, or at least not for very long. Who knows if the Mac Pro will even exist a year from now? Who knows if Mountain Lion will be a viable OS for editing on, or if Linux and Solaris will become the choice of the future? Lightworks has already released a limited alpha of their software for Linux, and OpenEXR, CinePaint, Blender, Nuke, Baselight, Softimage and Maya all run on Linux.
This uncertainty is scary, but also very liberating. In days gone by, there were serious problems in Singapore with the apprenticeship system. Some experienced editors, directors, cinematographers, writers, gaffers, grips, CAs, PAs and producers were reluctant to teach their juniors properly, lest they replace them. The “budget first, race to the bottom” attitude of pricing certainly lent credence to that concern.
What that created was a glass ceiling where junior or younger media professionals were consistently undermined, mis-taught and prevented from reaching their true potential by experienced pros protecting their rice bowl.
This democratization of tools has its benefits and downsides. The cost of producing content has decreased. Unfortunately, that is public knowledge and is commonly used as an argument for ever-shrinking budgets. This ignores the fact that expertise, experience and taste still need to be developed and cultivated for great work. FCPX at $299 isn’t going to change that.
Lindsey Stirling was eliminated in the quarter-finals of America’s Got Talent, but her collaboration with cinematographer Devin Graham propelled her YouTube channel to stratospheric heights, resulting in her launching her album on iTunes. The album debuted at 81 on the Billboard 200 and topped the Dance/Electronic Albums chart in the US.
Long story short, times are changing very fast. Things are less secure, but also a lot more interesting, so strap in for the ride and don’t be afraid to take risks. You’re more likely to lose out by missing the boat than by the missteps you make along the way, whose effects are easy to exaggerate. Take a chance; everyone is, in some way or another.
A lot of bile and vitriol has been spilt over the last few days over the launch of FCPx and the sudden freeze on FCP7, leaving many editors who work on FCP in broadcast and film for a living at a crossroads.
What’s the big deal? FCP 7 still works, just use that.
Not that simple, I’m afraid. FCS3, and by extension FCP7, is being pulled off store shelves and from Value Added Resellers. If you don’t already have FCP7, you’re too late, unless you are buying it off the black market or eBay.
You ‘pros’ are just afraid of change
That may be the case for some. But anyone who was already editing in 1999 and is using FCP now can’t be afraid of change. Because back in ‘99, FCP was the change that people were afraid of. It was simple, had no realtime capabilities and was very buggy.
But it was also significantly cheaper and easier to use than anything else in the market. Combined with DV, FCP played a big part in democratizing the tools of motion picture editing, without which many companies and professionals in the industry today would have never gotten their start due to the competitive price point.
Anyone who thinks this is a bunch of grumpy old geriatrics complaining about scary new interfaces that they don’t get has never worked in a collaborative broadcast television and film environment, where time, collaboration and professionalism are paramount in getting the job done, and the tools need to be up to the job.
It’s not about snobbery, or about wanting to feel like FCPx should be for ‘true pros’ only. I like range keywords. I’d like the option to toggle the magnetic timeline on and off. I’ve always thought face detection was long overdue for an NLE. Autofixing rolling shutter and proxy editing until ingest is done are great features.
So why do ‘pro’ users have such a big problem with FCPx?
A lot of talk has framed FCPx as having ‘left features out’, and considering this is branded as FCP ten, users can hardly be blamed for seeing FCPx as an upgrade to FCP7.
The truth is FCPx is a rewrite from scratch, and many features are simply ‘not written yet’.
Quote from Ron Brinkman, former developer of Apple’s Shake: “So if you’re really a professional you shouldn’t want to be reliant on software from a company like Apple. Because your heart will be broken. Because they’re not reliant on you. … Find yourself a software provider whose life-blood flows only as long as they keep their professional customers happy. It only makes sense.”
Former FCP team developer (2002–2008): “The pro market is too small for Apple to care about it. Instead of trying to get hundreds or even thousands of video professionals to buy new Macs, they can nail the prosumer market and sell to hundreds of thousands of hobbyists like me.”
For smaller facilities and boutique production houses that have the flexibility to write off 4–5 edit suites and push on, Apple pulling the rug out from under them represents an inconvenience, but one they can recover from fairly comfortably.
For larger companies, the stakes and significantly higher infrastructure costs make placing themselves at the whims of a single company (which is in turn beholden to the whims of a larger-than-life megalomaniac) with a track record of ‘taking their ball home and refusing to play’ downright frightening.
With a 32-bit FCP7 reaching the limits of its capability in a few years, editors who recently recommended FCP as the main editor in their facilities have been thrown under the bus in two ways.
First, FCPx will not in the near future be a viable upgrade path for large facilities requiring collaborative workflows. Any conjecture about features that will be added is just that: conjecture. Without any firm release dates, all the assurances in the world aren’t going to satisfy a CFO, MD or CTO out for blood.
Second, FCP7 will not be supported further, as recent actions regarding licensing and availability of upgrade packages show. (I know they are now back on the website)
This means being tied to a platform that will be obsolete in 1-2 years, with no concrete knowledge of whether the next incarnation (FCPx) will be ready for use within/before that time.
You’re talking about companies who might have spent tens or hundreds of millions of dollars on hardware, software, plugins and integration to Media Asset Managers, News systems, Transmission systems, Playout servers.
Companies which will now hold the person/people who recommended FCP as an editing platform to them responsible for the additional cost of switching, re-training, re-integrating, writing off obsolete/incompatible hardware & software.
This affects all editors regardless of company size. Smaller boutique production houses still need the ability to share projects and files with audio mixing houses, post-production facilities, voiceover studios and so on, because they don’t have those resources in-house. The large facilities do have them in-house, but the tools, project transport conduits and formats remain the same and need to remain supported.
So what’s missing?
The lack of support for FCP7 projects, and no intention of adding it, means that as far as Apple is concerned, all the business you did before FCPx is not important.
If your client comes back to ask you for a change in their project, or to refresh their video with their new logo, it’s their fault that they started doing business with you before you switched to FCPx. I’m surprised Macs around the world haven’t restarted at Year 0 after June 21st, 2011.
There is no physical install copy. You must purchase it on the App Store and download it online. No local copy is kept and the files are cleared once the installation is completed.
How modern and forward thinking. Except if you are on location editing your rushes on safari in Africa and your installation gets corrupted. Where are you going to get internet from to re-download the application and re-install it?
No broadcast quality video out.
I can’t even muster up the sarcasm for this one. How does an application with Pro in its name leave out such a basic feature, one that is an absolute necessity for confidence in the fidelity of the finished product?
Extremely basic tape workflow.
I don’t like tape. But archive footage will come on tape. TV stations still prefer tape. If you have the tape deck, baseband ingest of the tape is codec agnostic and the tape is a handy backup copy of the ingested file.
No multicam support
Reality shows and multicam studio shows don’t matter? Apple have promised it will be fixed, but to ship without it and offer making cuts and disabling tracks as a solution is plain ignorance.
Inability to manually assign video and audio layers to tracks
New editing paradigm, I get it. But if I’m handing over a complex multi layer sequence to another person, it helps if I can tell them all full frame graphic transitions are on track 5, all lower 3rds on track 4, all bugs on track 3, etc. If all these layers are automatically assigned, only the person who originally edited and composed the multiple layers will understand what’s going on.
There are numerous other issues I have not mentioned, because I think they’ll be fixed at some point.
The point is that Apple is not going to fix everything. It’s only going to bother fixing the things that their new target market wants.
Apple’s new target market
Do you not see what the fuss is about? Does FCPx rock your world? Congrats. You are Apple’s new blue-eyed boy.
There are a few hundred thousand FCP7 users complaining that it doesn’t work for them, yet. There are millions of people who will find FCPx makes the daunting task of editing video much easier for them. And they’d be right.
They won’t care that magnetic timeline takes away manual track assignment from them. Why would they want manual track assignment over not having to worry about audio sync?
Why would they care that xml/aaf/omf/edl are not supported? Autofix on ingest for rolling shutter and audio are much more useful to them.
I’m not saying FCPx isn’t a good tool for someone to edit video with. I’m just pointing out that this ’someone’ isn’t a professional broadcast or film editor anymore.
If FCPx works for you, more power to you; enjoy the ride and welcome to the world of motion picture editing. Just a word of warning: if you one day rely on a piece of professional Apple software for a living, remember this week and how Apple treats its professional product users.
On a Mac, rsync is already built in, so the core functionality is already in the system.
There are a number of GUI frontends around that help make this more user friendly, but most of them make you pay for what is essentially a juiced up copy and file difference comparison tool. Unless the sync frontend offers advanced features like complex rule based scheduling, watch folder triggering and versioning, you are getting ripped off.
Just use Terminal. I’m not a Linux geek, but it’s not really as hard as it seems.
The command is: rsync -avr --progress --exclude '*Render Files*' --delete /your/source/directory/ /your/destination/directory/
I’ll break it down for you:
rsync is the command that is being run.
-avr are switches that specify how the command works:
a = archive (so files are copied with the same settings/permissions on the destination as the source)
v = verbose (so you have detailed error messages, if any)
r = recursive (so files within folders are also copied)
--progress is there so you know the status of the backup and which file is currently being copied.
--exclude is there so you can exclude folders that don’t need to be backed up.
Because folders like Audio Render Files and Render Files have spaces in their names, we enclose the pattern in single quotes (') so that the command does not interpret the space as a cue to look for the next switch.
The * is a wildcard, which means all files and folders containing the phrase ‘Render Files’ will be ignored, regardless of what comes before or after the phrase itself. The position of the * indicates whether the wildcard applies before or after the phrase.
‘*Render Files’ will select Audio Render Files but not Render Files for GFX
‘Render Files*’ will select Render Files for GFX but not Audio Render Files
--delete makes the backup a direct mirror of the source at the time of the backup. For example, your source contains two folders, Project Runway and Drag Race. You finish the Project Runway project and trash the folder.
If --delete is enabled, the backup will also delete Project Runway from the destination, hence the term direct mirror: files lost on the source will also disappear from the backup. If you want the backup to retain files, do not include --delete.
Bear in mind that if --delete is not enabled and you move files around on your source between backups, you may end up with multiple copies of the same files in their old and new locations.
For example, you had your still files together with your video files when you first backed up, and you’ve now organized them into a folder. If --delete is not enabled, your backup will have the stills together with the video files as per the first backup, AND the new folder with the stills that you organized; you would have duplicates of the same stills in two locations.
That can get messy, so to be safe, rsync without --delete; then, when your source folder is organized exactly the way you want, add --delete so the correct, organized version is mirrored and the extra or misplaced files are deleted.
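Another safety net worth knowing about: rsync’s -n (--dry-run) switch previews exactly what would be copied and deleted without touching a single file. A sketch with throwaway folders, where the paths and filenames are just placeholders:

```shell
# Scratch folders standing in for a real project drive.
mkdir -p /tmp/dryrun_demo/src /tmp/dryrun_demo/dest
touch /tmp/dryrun_demo/src/show.prproj
touch /tmp/dryrun_demo/dest/old_cut.mov   # exists only on the backup

# -n (--dry-run) lists what WOULD be copied and deleted, without changing anything.
rsync -avn --delete /tmp/dryrun_demo/src/ /tmp/dryrun_demo/dest/

# Nothing has changed: old_cut.mov is still on the destination,
# and show.prproj has not actually been copied over.
ls /tmp/dryrun_demo/dest
```

Reading the dry-run output before re-running the command without -n is a cheap way to be certain --delete will only remove what you expect.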
An easier way to get your directory locations into Terminal is to drag the folder into the Terminal window; the full path of the folder will show up.
It will look like this /Volumes/YOUR_HD_NAME/yourfolders/
It makes a difference whether there is a / at the end of the folder path or not. Test the command on a folder with a few text files to see what happens and get a feel for it before you use it on your large video files.
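To see the trailing-slash difference in action, here is a minimal sketch with scratch folders (the paths are placeholders):

```shell
mkdir -p /tmp/slash_demo/src
touch /tmp/slash_demo/src/cut1.mov

# Without a trailing slash, the folder "src" itself is recreated inside dest_a...
rsync -a /tmp/slash_demo/src /tmp/slash_demo/dest_a/

# ...with a trailing slash, only the CONTENTS of src land in dest_b.
rsync -a /tmp/slash_demo/src/ /tmp/slash_demo/dest_b/

ls /tmp/slash_demo/dest_a   # shows: src
ls /tmp/slash_demo/dest_b   # shows: cut1.mov
```

In short: a trailing slash means “the contents of this folder”, no slash means “this folder itself”.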
You can also download Cronnix which can help you schedule this to run automatically based on time triggers.
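For reference, a Cronnix schedule boils down to a single crontab entry under the hood. A sketch with hypothetical paths, added via crontab -e:

```shell
# Run the backup nightly at 2am (the paths are placeholders for your own drives).
0 2 * * * /usr/bin/rsync -a --exclude '*Render Files*' /Volumes/Media/ /Volumes/Backup/
```

The five leading fields are minute, hour, day of month, month and day of week; Cronnix simply gives you a friendlier interface over this format.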
You can also backup to another machine on the network, but the instructions for that delve into ssh-key-authentication. I can provide the instructions if you want those, just leave a comment if that is the case.
OSX v Windows
Avid vs FCP
Wii vs Xbox 360 vs PS3
HD-DVD vs Bluray
AMD vs Intel
ATI vs nVidia
FireWire vs USB
Betamax vs VHS
Is there anything that polarizes opinion more than these black-and-white arguments?
I will not pretend to be an expert on all of the topics above; many of them are difficult to grasp, so if I do make any misrepresentations or errors, I apologize and invite people more knowledgeable on the topics to correct me.
What’s interesting is that while most people do recognize that the superior technology doesn’t always win (as with Betamax), the arguments tend to circle around just that, partially because, as people interested in technology, they have a bias towards data that is verifiable, irrefutable and accurate.
This ignores the external factors which are arguably even more influential in the outcome than the sheer technological specifications.
Again, my recollection of these events is hazy, and while I have done some research to substantiate this post, I honestly don’t have the time to fact-check it completely; any inaccuracies, if they exist, do not affect the conclusions I have drawn. This is an opinion piece, not a dissertation.
You may draw completely contrary conclusions as is your right and feel free to disagree with me. It would be my pleasure to discuss further with you in the comments.
Case study 1: AMD Athlon vs Intel Pentium 3
Context: In 1999, AMD and Intel went head to head in the processor wars. Prior to this, AMD’s K5 line of processors had rightly been written off as word-processor also-rans together with Cyrix (remember them?). The K6 series closed the gap, but the K7 series is where it gets really interesting.
Who had the better technology? With the launch of the Athlon, AMD leapfrogged into the computer processor performance lead for the first time.
Processor clock speed was the key marketing number of that era. AMD’s new K7 series, named Athlon, was equal to Intel in that regard, but their dual front side bus with DDR technology (Double Data Rate, not the dancing game), 180-nanometer process and 3DNow! floating point unit meant that an AMD Athlon clocked at 600MHz (don’t laugh, this was 1999) would outperform an Intel processor clocked at the same speed, by performing more instructions per clock.
If you think of 600MHz as a clock that ticks 600 million times a second, it is easy to see how a small lead in instructions per clock, amplified by the clock speed, can translate into significant performance gains.
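To put rough numbers on that, here is the arithmetic; the IPC (instructions per clock) figures below are made up purely for illustration, not measured values for either chip:

```shell
# 600MHz = 600 million clock ticks per second.
CLOCK=600000000

# Hypothetical IPC figures for illustration (scaled by 10 to stay in integers).
echo $(( CLOCK * 30 / 10 ))   # at IPC 3.0: 1800000000 instructions/sec
echo $(( CLOCK * 27 / 10 ))   # at IPC 2.7: 1620000000 instructions/sec

# A 10% IPC lead at the same clock is 180 million extra instructions every second.
```

The same small per-tick advantage multiplied by hundreds of millions of ticks is where the real-world gap came from.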
Intel, at that point, were having problems transitioning to the 180-nanometer process, were experiencing material shortages, and fell behind in the performance stakes.
The Athlon Thunderbird series was also popular for its overclocking abilities, matching the ‘overclockability’ of the venerable Mendocino Celeron 300A. It could be overclocked by connecting the L1 bridges of the chip, shading a conductive path with a graphite pencil, which was much easier than fiddling with jumpers on the Celeron.
Overclocking takes advantage of the fact that most processors are physically capable of performing at higher clockspeeds than they are specified at. As manufacturing processes improve, yields are so high that often the budget segment processors come from the same manufacturing batch as the mid range processors. They are then underclocked to cater to the low/budget market segment.
How did they lose? Marketing. Consumers were conditioned by Intel’s marketing to believe that the performance of a processor was defined by its clockspeed, which was true for a time. However, when AMD overhauled this by offering superior performance at equal clockspeed, they failed to educate the market about it.
Worse, they tried to represent their superior performance on Intel’s terms. For their XP series, they named their processor clocked at 1600MHz the Athlon XP 1900+, to represent that it should be regarded as the equal of Intel’s processor at 1900MHz.
There are two problems with this. First, why name your products based on your rival’s nomenclature and lend credence to the idea that MHz means everything, when that misconception hurts your product?
Secondly, any layman with a smidgen of diligence who sees 1900+ instead of 1900MHz on a retail box will wonder what the “real MHz” is, because that’s the number the consumer has been told to look for by the market leader. If you know nothing of the technology, see something labelled 1900+, and upon closer inspection discover it is actually 1600MHz, how do you feel?
Cheated. Would you ever trust a company that made you feel cheated?
AMD retained decent marketshare during that period, mostly buoyed by embarrassingly geeky kids like myself looking to get the most bang for buck for our gaming systems, but really, they should have been capitalizing on their temporary technological superiority, and consolidating that position. Unfortunately for everyone but Intel, they did not.
What happened in the end? Intel correctly targeted the growing mobile segment and were first to market with the Core Duo and Centrino. Low power consumption, not performance, was the new black. Intel’s significant lead in the mobile market coincided with an unprecedented boom in demand for mobile computing, cementing their position as market leaders.
Now, as the larger company, Intel can afford to take risks with their design approaches and directions, which gives them higher odds at making a breakthrough in technology. These are risks AMD can no longer afford to take. In poker terms, they’re short stacked on their way to being blinded out, while waiting for the nuts to double up.
That also allows Intel the resources, if they wish, to wage price wars which make no economic sense in the short term, but pay off in the long term if they put AMD out of business, gifting them what would effectively be a monopoly for the computer processor market.
Worse still, AMD’s acquisition of ATI means that if they go out of business, two monopolies will be created: Intel benefiting in the processor space and nVidia in the graphics card market.
Firewire vs USB.
Editing professionals swear by FireWire. In the early years of USB, FireWire was an oasis of constant throughput, while any drive with a USB interface was spat at as a devil incarnate and loathsome paperweight. Is USB still that bad? What about USB3? eSATA? Gigabit Ethernet? We’ve seen the future, and it’s one where banks won’t loan us the money to buy Fibre Channel.
Even though the screening facilities were not geared up for proper 2D 4K projection, the richness of the image completely blows you away. The cinema we were using was the only one equipped with a 4K projector but its primary use as a stereoscopic 3D theatre meant that it was using a silver screen, not ideal for 2D images as it does not give a proper rendition of the black levels.
The opening shot of the new Mysterium-X sensor showreel was dim, shot at ASA 2000.
The image had little visible noise and looked like ASA 640 at worst.
Leo DiCaprio in the shot is sitting on the floor lighting a cigar. When he lights the match, THE FLAME IS ENOUGH TO ILLUMINATE HIS FACE!
See that video here. No grading, encoded in Prores, so if you don’t have the codec installed, don’t bother downloading it, you won’t be able to view it. This pales in comparison to the 4K version, but it’s the only way you can get a meagre approximation of the dynamic range and noise levels of the image. The picture has tonal range and subtlety that even ProRes struggles to do justice to.
Cynics may say the lighting was augmented, but it sure didn’t look like it; the flickering and light fluctuations were true to physics.
Once I saw the specs stated that overcranking was possible without downrezing, the lightbulb in my head went off. For the longest time I was trying to achieve HDR in post by getting Ian from Widescreen Media to shoot 2 identical shots at different exposures with the beam splitter rigs he uses for stereoscopic work, then trying to blend these in post. Red’s implementation is far superior and seamless.
HDR is done by overcranking and exposing alternate frames at the 2 different exposure settings, and the frames are linked by metadata. I believe their term for it was conjoined frames. The cost is only increased filesize (by implication, higher throughput requirements?).
If I’m not mistaken, the file size increase is not double, so there is likely some form of lossless/highpass/lowpass compression to save space and data rate.
e.g. if you’re taking the over-exposed frame for shadow detail, you probably don’t need to read/encode the highlights. This is all conjecture on my part, so it would be interesting to know how it’s done.
There is an easy mode that tonemaps/blends the 2 images, or a more advanced mode that gives you more control over the blending settings, with adjustable response curves.
With a theoretical 13-stop latitude, and assuming a 4-stop overlap between the exposures, a dynamic range of 22 stops is possible. The shots on the reel looked like they overlapped conservatively by 7-9 stops (17-19 stops of combined range), so I’m wondering how far it can be pushed. Bear in mind I watched it only once, and was busy picking my jaw up off the floor, so my dynamic range estimations should be taken with a shovel of salt.
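The stop arithmetic is simple enough to sanity-check: two exposures each spanning the sensor’s latitude, minus the stops where they overlap. The figures below are the ones from the text:

```shell
# 2 exposures x 13 stops each, minus the 4 stops where they overlap.
LATITUDE=13
OVERLAP=4
echo $(( LATITUDE * 2 - OVERLAP ))   # prints 22
```

Narrow the overlap and the combined range grows; widen it and the blend gets smoother at the cost of total range.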
All in all mind blowing stuff. This changes everything, again. But without the snarky ads and hipper than thou attitude. Just guys who love their cameras, giving you the best camera technologically possible, NOW and CHEAP.
There are times you wish you could teleport to your office to make that small change and render the sequence, burn the dvd, or export to the ftp.
Or, you might have set the project to render and headed out for dinner hoping to return to see it done, only to realize the NLE crashed 2 minutes in, leaving you with another 2 hour wait before you can do what you wanted to do.
These days there are many tools which might be able to help you do those steps automatically upon completion but what if the program crashes, or someone shuts down your machine, and those steps do not complete?
Sometimes, you just need to be able to see that it went through alright so you can have a restful night of sleep knowing that the work is done, rather than anticipating a disaster waiting for you in the morning.
Essentially, there are times you need to do something at your edit station, but it’s such a minor step that the amount of time you would take to go back to the office just to do it seems like a huge waste of time.
This will help you do that so you can be more productive and not waste time waiting around watching a render bar or waiting for a long file transfer.
Either way, there are 2 very important acronyms you need to know about to make your life that much easier.
A Virtual Private Network is a computer network which adds a software layer over an existing network in order to establish a secure connection for communication across an insecure network.
In this case, the insecure network is the internet, and the Virtual Private Network is what we will be establishing between the VPN host (the computer being controlled) and the VPN client (the computer you will be using to control the remote host).
By using a VPN, we will not only be able to connect to the office network securely as if we were physically there, but, with the use of VNC, control the VNC server with the keyboard and mouse of the VPN client.
For the purposes of this article, we are using Hamachi, a zero-configuration VPN. It was originally open source before it was bought by LogMeIn. Since then, development of the Mac OS X and Linux versions has stopped.
There is an unofficial frontend for the Mac called HamachiX. I will demonstrate how you can use it, but I will also show you the command line interface, as the frontend is buggy and sometimes crashes.
Virtual Network Computing allows you to control the mouse and keyboard of a remote computer using your own keyboard and mouse, as if you were sitting at that machine.
This can be very useful within large offices, allowing you to control the machine in the server room, or letting the technical department troubleshoot problems without physically having to be at the machine.
What a remote machine looks like in Mac OSX Leopard’s Screen Sharing app.
Why must we use VPN and VNC?
What we are trying to do is to extend the VNC functionality to computers which are not on the same network in the physical confines of the office.
A VPN is one of the means available to establish this connection; it allows computers which are not physically connected to the office network to connect to the VNC host as if they were, requiring only normal internet connectivity.
Configure your VNC server
This needs to be done on the machine which you want to control remotely.
Leopard/Snow Leopard has a VNC server built in.
Open System Preferences > Sharing.
Click Remote Management, then Computer Settings…
Select as shown.
Go back to the Remote Management page and Allow access for: Only these users
Add the users you want to give remote access privileges to with the + icon.
If you’re on Tiger, use Vine Server. It works in a similar fashion.
First, install Vine Server.
Then open the application.
Enter your desired password. You can leave the other values at their default.
When you start your server, this status screen will show you the IP Addresses and the port through which you can connect to the server.
If you want to ensure that you will be able to connect to your server, you should not allow the machine to sleep.
Allow multiple VNC connections so that failed sessions do not prevent your machine from being controlled.
Click the System Server button to make Vine Server automatically start at boot.
This is the configuration menu for the System Server. Requiring SSH is a more secure way of controlling your server.
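For the curious, the “require SSH” route usually means tunnelling the VNC port through SSH and pointing your viewer at localhost. A sketch; the user, host and ports are placeholders for your own setup:

```shell
# Forward local port 5901 to port 5900 (the standard VNC port) on the remote machine.
# editor@office-server.example.com is a hypothetical account and host.
ssh -L 5901:localhost:5900 editor@office-server.example.com

# While the tunnel is up, connect your VNC client to localhost:5901.
```

The VNC traffic then travels inside the encrypted SSH connection instead of crossing the network in the clear.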
The Server is configured! Now how do I control it?
If you’re on Leopard/Snow Leopard, you have /System/Library/CoreServices/Screen Sharing.app.
If Bonjour is functioning properly, you should also be able to see the server in a Finder window and click Share Screen.
If not, just launch Screen Sharing and enter the Hamachi IP of the computer you want to control. Remember that it must have Sharing and Remote Management set up before you can control it.
It works as well, but Screen Sharing has a more efficient way of compressing the data stream and is much more responsive.
Be a Control Freak
Now you should be able to control your server within your own network by accessing its internal IP.
You can check this by opening /Applications/Utilities/System Profiler.app on your VNC Server.
Go to the network page and look at the column named IPv4 Addresses.
You should test if the VNC server works within your own network before trying to setup your VPN for external connections to the server.
Externalize your Control Urges
Firstly, it is much simpler to connect to your office network if you are using a static IP Address.
This means that your IP will never change and you can simply type this address into Screen Sharing.app from anywhere and it will connect to the server.
Static IP addresses normally cost more; most people are on dynamically assigned IP addresses, which Internet Service Providers rotate regularly, so you do not always have the same IP address.
How do I check my IP?
You can find the external IP assigned to you by your ISP by using this online IP detection tool.
Use the Java applet to find your real IP address; the initial result may give you your ISP’s proxy server address instead.
Loopware also provides a useful tool which resides in your Server’s Menubar.
This shows you all the IPs your computer is assigned, including the Hamachi IP, which you will need in order to access your server remotely via the Hamachi VPN.
How do I know if my IP is dynamically assigned?
The most definitive way is to ask your ISP. The more cumbersome way would be to first take note of your current external IP, reset your ADSL/cable modem and reconnect it a few minutes later.
Check your external IP again to see if it is different. If it is, you are probably on a dynamically assigned IP.
However, do not take it for granted that you are on a static IP, checking with your ISP is the more accurate way of ascertaining that.
My IP is dynamically assigned, so how do I connect to my server if the IP keeps changing?
There are a number of VPN solutions on the market. I use hamachi because it runs on Mac OSX, windows and linux, and is free.
I cannot vouch for the effectiveness of other solutions.
What hamachi provides is NAT traversal (Network Address Translation traversal), which allows it to tunnel through routers and firewalls, along with a mediation server, which determines the respective IPs of the server and client.
HamachiX is not an official release, but it comes with a graphical frontend that makes hamachi more user friendly for non-commandline users.
I use the commandline with hamachi but HamachiX makes installing the tap/tun drivers easier, so this is my recommendation for users who are not comfortable with the command line interface.
HamachiX installation instructions
After installing HamachiX, open the application.
You will not be able to connect to any networks yet because the system components which operate behind Hamachix’s graphical interface have not been installed yet.
To install those, click Help > System Support > Install system components.
Then, click Help > System Support > Reset hamachi background process
Quit HamachiX and start the application again.
Configure Hamachi Network
If you have a windows machine available, use this machine to create your hamachi networks.
The windows version of hamachi has more features and allows you to manage your networks and the members of those networks through a web interface.
Update: Hamachi version 2 has been inconsistent with its ability to connect to linux and mac clients. Use the old version of hamachi instead.
This is the hamachi windows interface, very similar to an IM client. The clients are categorized by their network names.
To join or create networks, use the Network drop down menu.
This is the interface through which you create your network. Do not forget your password. There is no password recovery tool. Managed networks are a new feature in Hamachi V2 which allow for web-based administration. Your mileage might vary on their interoperability with Mac and linux clients.
Network names are case-sensitive, so bear that in mind when you create and distribute the details.
This is the context menu available when you right click on a user’s name.
The mac and linux versions of hamachi are v 0.9xx and are unable to re-assign network ownership to other users.
This becomes problematic if the computer which created the network is damaged/reinstalled/formatted/sold/stolen.
What would happen is that you would have a network where people can join (if they know the password), but you would be unable to evict or ban users as you would not have ownership of the network.
This can be a security risk if you give your password out freely to part-timers, freelancers or vengeful ex employees.
This is the reason I recommend you create your networks on a windows machine. I have a VMware Fusion Virtual machine exclusively for this purpose.
Instructions for setting up your networks using HamachiX (if you do not have a windows machine)
To set up your user account, first log in to the hamachi network.
Then, open your preferences and set up your Nickname.
Click the Add icon to add a new network.
After this, create a unique Network Name. This is case-sensitive so remember to take note of that when distributing the details to users.
I recommend you setup one network for employee access only, and another for clients.
You might also want another for external vendors or tech support, or another for directors of the company depending on how many tiers of security you have.
My recommendation is to keep it simple and have as many as you need but no more than that.
Administration becomes increasingly difficult once you add unnecessary layers of hierarchy.
The reason you might want to configure hamachi networks for external parties like clients and vendors is remote support. If you configure your clients’ computers, you can remotely access them, with permission, to show them how to fix a problem like installing codecs or upgrading their version of QuickTime if they don’t know how to do it themselves.
A separate network for your tech support allows them to remotely troubleshoot your systems or make qualitative assessments before deciding whether they need to be onsite, saving you unnecessary transport charges.
Obviously, the reason these networks are separate is for security. Be judicious about the clients you connect on the same network.
If they are competitors, don’t put them together, otherwise you could well be facilitating industrial espionage.
Generally, I recommend adding only your absolutely most important clients, the ones that represent more than 40% of your business and whom you trust implicitly.
I don’t want to have to start Hamachi manually. Can I make this do it automatically when OS X boots?
To install the hamachi boot scripts, download this file.
This file is provided by faib who re-wrote this hamachi daemon installation tutorial by SilveRo. Feel free to make a donation to him if this script is useful to you, it certainly was for me!
Navigate to the directory where you downloaded the file.
To do this, type cd followed by a space, then drag the folder containing the file from the Finder into the Terminal window and hit Enter.
From here you can enter the rest of these commands by copying and pasting them into the terminal.
sudo cp hamachi-boot-macosx.tar.gz /Library/StartupItems
cd /Library/StartupItems
sudo tar zxvf hamachi-boot-macosx.tar.gz
sudo chown -R root:wheel hamachi
It is best to copy the files you are editing to the Desktop or another folder first, because the /Library/StartupItems/ folder has permissions set which prevent you from modifying files in place. Make your changes there, save them, then copy the files back into /Library/StartupItems/hamachi/; OS X will prompt you for your password and allow you to overwrite the originals. If you make any mistakes, just delete the folder /Library/StartupItems/hamachi/ and start over.
The changes to make
Open hamachi_helper with TextEdit, and edit the beginning of hamachi_helper, replacing “hamachi_account” with the User Account Hamachi was installed to.
You can check by clicking the Apple Icon on the top left of your menu bar and checking the User Account after the words Log Out.
If you installed Hamachi as root, I believe the script will work if you set HAMACHI_OWNER=root and HAMACHI_DIR=/var/root/.hamachi.
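For a regular user account, the two variables at the top of hamachi_helper would end up looking something like this (the account name editor is hypothetical; substitute your own short name):

```shell
# Hypothetical values: replace "editor" with the short name of the
# account hamachi was installed under.
HAMACHI_OWNER=editor
HAMACHI_DIR=/Users/editor/.hamachi
```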
After hamachi_helper works, change hamachi_networks.conf to contain the names of the networks you would like to sign on to. One network name per line, as many lines as you want. As far as I know there is no hard limit.
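A hypothetical hamachi_networks.conf, matching the staff/client split suggested earlier (the network names here are made up):

```text
MyCompany-Staff
MyCompany-Clients
```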
This package is designed to be run by SystemStarter during the boot process. However, you can test it manually by entering commands like these in the Terminal (the exact service name comes from the Provides key in the package’s StartupParameters.plist, so check there if Hamachi does not match):
sudo SystemStarter start Hamachi
sudo SystemStarter stop Hamachi
Test if this works by pinging the server’s hamachi IP, then restarting your server and pinging it again once it has booted.
You can do this by opening Terminal and entering the following command. (All hamachi IPs start with 5; replace the x’s with the correct numbers.)
ping 5.xxx.xxx.xxx
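Since hamachi v1 hands out addresses from the 5.0.0.0/8 range, a quick sanity check that you copied the right address from the menubar tool might look like this (the sample IP is a placeholder):

```shell
# Hamachi v1 assigns addresses in the 5.0.0.0/8 range.
ip="5.123.45.67"   # placeholder: your server's actual hamachi IP
case "$ip" in
  5.*) echo "looks like a hamachi address" ;;
  *)   echo "not a hamachi address: check the menubar tool again" ;;
esac
```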
Can I control it from my iPhone?
Now you can!
Jaadu VNC app has been my favorite iPhone VNC app since it was released and v3.0 ups the ante by allowing you to connect to your server even if you are not connected to the same wireless network.
You need to install the Jaadu VNC connect software on your Server.
Allow Jaadu VNC Connect to run as a service so it will automatically start on boot.
Enter your google credentials on the Jaadu VNC Connect dialog.
This is what the drop down menu from the MenuBar should look like if you’ve connected successfully.
Jaadu VNC Connect uses google as a DNS updater to negotiate a connection between your iPhone and your server.
Whenever you are logged into your Gmail on two different computers, you will see a notification at the bottom of the screen indicating other computers have the same account open, listing the IP addresses of the other computers.
This is likely to be the API facilitating the connection.
Show me the money! How do you control it from the iPhone?
Install Jaadu VNC Version 3.
This is what the application looks like. This is the Manual Connection Tab. You can manually add IP Addresses or DNS names here.
The Discovered Tab shows servers automatically discovered by Bonjour.
This is the Internet Tab. All servers with Jaadu VNC Connect installed and logged in to the same google account are available here.
The tab will list all online servers logged in to the google account.
Screen is loading. Take note that I am on 3G, so the loading is significantly slower than on Wireless G.
These are the soft keyboards available on Jaadu VNC.
These are the connection settings.
Hamachi did not work for me! Is there another way?
You need to set up your server to update its IP with a dynamic DNS service. This service resolves your dynamic IP to a hostname that you choose.
This allows the client to make a connection to your dynamically assigned IP. However, if you have a router, you need to configure it to forward the connection to the correct computer in the office.
This might involve configuring a custom port on your Server and setting your router to forward requests to that specific port to the Server.
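As a concrete sketch of the arithmetic: VNC servers, including Apple’s Screen Sharing, listen on TCP port 5900 plus the display number, so the forwarding rule maps an external port of your choosing to port 5900 on the server’s internal IP. The external port and internal address below are illustrative:

```shell
# VNC listens on TCP 5900 + display number; display 0 is the usual
# case for a single-user Mac, so the internal port is 5900.
display=0
vnc_port=$((5900 + display))

# Illustrative values: choose your own external port, and use your
# server's real internal IP.
external_port=5901
server_ip="192.168.1.50"
echo "Router rule: WAN port $external_port -> $server_ip:$vnc_port"
```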
This method allows you to connect to your server even if you have a dynamic IP, but is less secure than an end to end VPN connection, due to the open nature of the connection.
I will not go into detail about how to setup vnc using a dynamic dns service and router port forwarding.
If you do not know how to configure custom ports for your servers and forward ports in your router, you probably also do not understand the security implications of allowing vnc access over an unsecured WAN connection.
I hope this tutorial has released you from the shackles of the edit bay, and that you no longer need to waste hours watching a render bar.