Well, What Now? Ideas on the Kinect
We are now about two months into the new console cycle, so much about the new consoles is still too early to judge. Sony and Microsoft are still awaiting major promised features, Gaikai streaming and Twitch broadcasting respectively, among other things.
That said, Microsoft is still lacking in the features department for its handy-dandy peripheral, the Kinect. While many features built into the console are useful enough, the integration within games is still minimal. To make matters worse, in some cases it can even hinder gameplay. The Microsoft exclusive Titanfall won't even be using Kinect functionality to add to gameplay.
But fear not: there are many ways the Kinect can still be used going forward. We know some of what Microsoft boasts the machine can do, and the ideas are endless. In this article, I'll list some of the ideas I'd like to see in a game or two in the future.
Biometrics = Fear
Two cycles ago, the PlayStation 2 game Haunting Ground showed us that a game can be especially frightening when the protagonist can actually be incapacitated by fear. That was based on the character's reaction to stimuli within the game. What if the character's ability to handle things at his or her most intense moment was based on real-world player reactions? Microsoft boasts that the Kinect's capacity for biometrics allows the console to sense your heart rate, and scary games have a penchant for raising said heart rate, so bringing the two together seems like a logical conclusion. The calmer you are in real life, the calmer the hero. This could add another layer to the horror game staples that encourage the player not to engage hostiles, and add a level of urgency to getting to that next room.
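For the curious, here's a minimal sketch of how that mapping might look. Everything here is an assumption, the resting and panic thresholds, the function name, and the idea of a 0-to-1 "composure" stat; the Kinect SDK exposes nothing like this out of the box:

```python
# Hypothetical sketch: turning a heart-rate reading into an in-game
# "composure" stat, echoing Haunting Ground's panic mechanic.
# RESTING_BPM and PANIC_BPM are assumed tuning values, not SDK constants.

RESTING_BPM = 70.0   # assumed baseline heart rate
PANIC_BPM = 130.0    # assumed heart rate at full panic

def composure(heart_rate_bpm: float) -> float:
    """Return a value in [0.0, 1.0]: 1.0 means fully calm,
    0.0 means the hero is too panicked to act."""
    span = PANIC_BPM - RESTING_BPM
    calm = 1.0 - (heart_rate_bpm - RESTING_BPM) / span
    return max(0.0, min(1.0, calm))  # clamp to [0, 1]

# A calm player keeps the hero steady; a racing pulse incapacitates them.
print(composure(70.0))   # fully calm -> 1.0
print(composure(130.0))  # fully panicked -> 0.0
```

The clamp matters: a player whose pulse dips below the baseline shouldn't become superhumanly calm, and one above the panic threshold shouldn't go negative.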
Up next are the RPGs. It takes the longest time for me to get into games like Mass Effect. Don't get me wrong; I love the story, and I may be in the minority that can honestly say I didn't mind the post-DLC ending one bit. What I'm referring to is the character creation menu. Across multiple playthroughs of the first two games, it took around 30 minutes to create a character that looked like me, and another 15 minutes scrapping part of that and settling for a face that gives a rough impression (like, two ballparks' worth of estimates) of what I may look like in real life. Wouldn't it be handy to have a device that can snap a photo of my face? The Xbox One just so happens to come packaged with such a device, and besides throwing grenades and having conversations, the last iteration of the camera wasn't used too often with the trilogy. While I don't expect a third-party developer to utilize this (what with strict cross-platform titles to conform to), Microsoft could add that level of immersion to its in-house franchises.
Speaking of conversations, there is still more ground to be conquered there. We've seen what happens when a game pairs its AI with the ability to read the player's facial expressions, but can the opposite be carried out successfully? It could be a cool idea to watch your character's facial expressions mimic your own, with the care you show in a tense situation affecting whether or not a character wants to cooperate.
The new Kinect sensor is supposed to be much more accurate than the last generation of motion controls, right? In theory, the pesky gimmicks of the last generation should fall away, with gesture controls becoming more fluid and natural. This could even be combined with the push of a button on the Xbox One's controller to instruct the Kinect to initiate gesture detection.
Many of these ideas aren't by any means revolutionary. In fact, they mostly do little to advance the current possibilities of technology a lot of us already have in our homes. But the real point is this: developers will have to take chances on making the Kinect worthwhile over the course of the console cycle. Microsoft more than went out on a limb in refusing to separate it from the console. Owning one becomes an investment, and that functionality will need to be apparent to justify the cost of the machine.