Learning Unreal Blueprints for VR

I enrolled in Udacity’s Learn Unreal VR Program to gain a better understanding of Unreal Engine and to learn blueprint coding. I have used my VR headset for gaming and 3D modeling purposes, but never to develop any sort of interface or user interaction. As these are growing interests of mine, this course was a perfect fit to gain new skill sets in these areas.

The course’s primary focus was not teaching you how to model or bring assets into Unreal, but rather how to create an experience for the user. For example, I learned how to set up a player pawn and user controls, how to set up blueprints that make assets interactive to the user, and how to begin creating interfaces for timers, scores, etc.

The first project was to create a game called “Kitchen Cleanup”. The game involved the user picking up randomly spawned plates and cleaning them in the sink. My job was to:

  1.  Create a player pawn with a controller that had the ability to interact with plates (using blueprint interfaces)
  2.  Spawn plates every few seconds that could be interacted with (randomly spawned at target points)
  3.  Create a sink that would read if a plate was in it (AKA overlapping its collision box) and “clean it” (destroy the actor after a 2 second delay)
  4.  Develop a timer and score system that the player could see, a method to start and end the game (event dispatchers), and a start menu (interface widgets)
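Blueprints are visual node graphs rather than written code, but the logic behind those four steps maps onto something like the Python sketch below. Everything here (the `Plate` and `Game` names, the target points, the spawn interval) is hypothetical and just for illustration; in the actual project these were Blueprint nodes like Set Timer, OnComponentBeginOverlap, Delay, and Destroy Actor.

```python
import random

# Toy, non-Unreal sketch of the Kitchen Cleanup logic.
# All names and numbers are made up for illustration.

TARGET_POINTS = [(0, 0), (1, 2), (3, 1)]  # stand-ins for Target Point actors
SPAWN_INTERVAL = 3.0                      # seconds between plate spawns

class Plate:
    def __init__(self, location):
        self.location = location
        self.alive = True

class Game:
    def __init__(self, duration=60.0):
        self.time_left = duration   # countdown timer the player sees
        self.score = 0              # incremented per cleaned plate
        self.plates = []
        self._spawn_timer = 0.0

    def tick(self, dt, sink_location):
        """One frame: count down, spawn plates, 'clean' plates in the sink."""
        self.time_left -= dt
        self._spawn_timer += dt
        if self._spawn_timer >= SPAWN_INTERVAL:
            self._spawn_timer = 0.0
            # Blueprint equivalent: a looping timer that spawns a plate
            # actor at a randomly chosen target point.
            self.plates.append(Plate(random.choice(TARGET_POINTS)))
        for plate in self.plates:
            if plate.alive and plate.location == sink_location:
                # Blueprint equivalent: sink overlap event -> 2 s Delay
                # node -> Destroy Actor (the delay is skipped here).
                plate.alive = False
                self.score += 1
        self.plates = [p for p in self.plates if p.alive]
```

The real versions of steps 1 and 4 (the player pawn's blueprint interface and the event dispatchers that start/end the game) are the glue between this logic and the player's controllers.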

The course ultimately gave me the skills I needed to develop the game. Here’s a preview (sorry, WordPress only accepts my grainy GIF rather than video):

KCU.gif

Other than the actual mechanics of the game, I threw together the main menu logo in Illustrator:

KitchenCleanUp.png

I also added a particle emitter to depict “bubbles” near the sink (a texture created in Photoshop). Overall, the game was a blast to make. I will make a second post about the second project once it’s been reviewed by Udacity.



More VR Stuff, this Time with the Rift

I am fortunate enough to have a wonderful brother just as interested (if not more so) in emerging tech, specifically virtual reality. He already owns a Vive, but decided to order an Oculus Rift a few months back. Due to some fluke, Oculus accidentally sent him two Rift and Touch bundles, one of which he lent to me. I’ve been having tons of fun with it, but more importantly, I’m learning how it can be useful in existing fields, particularly because I work at an architecture firm that is currently researching how VR can be used for architectural visualization. This isn’t exactly a brand new topic, but I’m really excited to be personally involved in the research, both at home and at my workplace.

In terms of architectural visualization, I think the biggest, most exciting software for me right now is TwinMotion for VR. I learned about TwinMotion from one of Fabrice Bourrelly’s Unreal Architectural Visualization webinars (check them out here when you get the chance). I am currently learning Unreal Engine as I am really interested in building my own VR environments from the ground up (textures, animations, lighting, the whole deal). Of course, that takes time and energy, neither of which most of us has to spare. TwinMotion is a more “plug and play” software for VR, with built-in material, lighting, and animation presets. Essentially, all you have to do is bring in a model (whether a Revit, Rhino, or Cinema 4D model, among others) and add in whatever you like. Another pro: the software is compatible with most VR headsets. The software is still in its early stages, but I’m looking forward to seeing what they’ll have in store for us. Check them out here.

On to a more lighthearted, fun topic: games and apps in VR! So I love messing around with my headset, and there are some really fun games and things to do while in virtual reality. Of course, my favorite moment was when both my boyfriend and brother played AFFECTED – The Manor, a horror game. I was a wimp, while both of them were very brave facing ghosts and goblins and scary things (though the bf did scream like a girl a couple times). Truly a terrifying experience. Another cool discovery was Medium, a sculpting app. Imagine 3ds Max or Mudbox, but rather than staring at a monitor and sculpting with a keyboard and mouse, you create models within VR. Your canvas is the virtual world within the headset, and your sculpting tools are your hands (well, the Touch controllers you hold… but you get the point).

I will try to post a time lapse video at some point of me using Medium, but for now I can share a few screen grabs of my latest creation: a silly octopus. The app was a little finicky with layers and resolution (Medium actually began to crash after I added too many suckers, but I should have expected that… I was modeling a large model in VR…), and my head hurt after a couple hours of being in the headset, but overall it was really fun. I can see artists using Medium to create large virtual environments with crazy creations.

katie1422_2017-07-12_19-55-33.png

katie1422_2017-07-16_20-46-09.png

Electroplating 3D Prints

I’ve always been interested in 3D printing in different materials, or even coating prints with paint or metal. During my time at UVA’s architecture school, I printed in nylon, wood filament, and many other crazy materials. I’ve also tested smoothing ABS with acetone vapor (works wonderfully). However, I really wanted to try something new, especially now that Techshop was at my disposal. My friend/coworker Brett runs an electroplating class, which I decided to take one day after work.

Basically, the idea of electroplating is to use an electric current to coat a conductive object (typically some sort of metal) with, well, a different type of metal. It’s extremely useful for many different things, such as decoration, hardening objects, or protecting them from corrosion. In Brett’s class, we plated a copper penny with nickel, and the process was far simpler than I originally imagined. You mix nickel acetate (an easy buy from Amazon) with vinegar in a plastic container. Once that’s all nice and mixed up, you connect whatever you are trying to electroplate (in our case, copper) to the cathode, aka the negative side of a small power supply (a 6V battery), and the metal you plan to electroplate with (nickel) to the anode, aka the positive side. Place both in the nickel acetate vinegar bath, turn the power supply up to about 4V (best to keep the voltage low), and wait. You will begin to see a coating form over your penny!
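For a rough sense of how much metal a bath like this deposits over time, Faraday’s law of electrolysis relates the charge passed to the mass plated out. This is my own back-of-the-envelope sketch, not something from Brett’s class, and the current value is an assumption (we only ever controlled voltage, and a hobby bath’s actual current varies widely):

```python
# Back-of-the-envelope nickel deposition estimate via Faraday's law:
#   m = (I * t * M) / (n * F)
# I = current (A), t = time (s), M = molar mass (g/mol),
# n = electrons per ion (Ni2+ -> n = 2), F = Faraday constant (C/mol).
# Assumes 100% current efficiency, which a real bath won't achieve.

F = 96485.0        # Faraday constant, C/mol
M_NICKEL = 58.69   # molar mass of nickel, g/mol
N_ELECTRONS = 2    # Ni2+ + 2e- -> Ni

def nickel_deposited_grams(current_amps, hours):
    """Ideal mass of nickel plated out for a given current and duration."""
    charge_coulombs = current_amps * hours * 3600.0
    return charge_coulombs * M_NICKEL / (N_ELECTRONS * F)

# e.g. an assumed 0.1 A for a 7-hour run: nickel_deposited_grams(0.1, 7)
```

At an assumed 0.1 A, seven hours works out to well under a gram of nickel, which matches the intuition that small hobby setups only build up a thin coating.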

I was extremely pleased with the results and wondered if I could somehow use the same technique to coat a 3D print. I did some research online and saw that many people had tried it and gotten some pretty awesome results. It seemed that the cheapest method was using a graphite-based coating, which would make the 3D print conductive. I decided I would try out a combination of acetone and graphite powder: the acetone, in theory, would cause the ABS plastic to melt a bit (remember acetone vapor smoothing) and therefore act as an adhesive for the graphite. I purchased some graphite powder and acetone and mixed them. Oh my goodness, the graphite got everywhere! My hands were covered in this stuff for days. But the mixture turned out very well, in my opinion. I found an old ABS print I didn’t mind testing on and coated it with my solution. Here is how the solution looked, along with the graphite powder I purchased (notice I kept it in a plastic baggy at ALL TIMES):

Once it was coated, I used the same method Brett showed in class, but rather than connecting a copper penny to the cathode, I connected my graphite-covered 3D print (well, I wrapped nickel wire around the print to ensure it was secure and connected that to the cathode). Unfortunately, I did not take any photos of that rig, but I do have a picture of how it turned out:

IMG_8126

First try went way better than I expected. I honestly didn’t think it was going to work at all (my coworkers at Techshop had their doubts as well). The dark gray areas are places that didn’t take the nickel, but the lighter areas are locations coated in nickel- success! Now that trial 1 was over, it was time to move on to bigger things. Such as JEWELRY.

I designed a basic parametric bracelet in Grasshopper and printed it on a Stratasys Mojo. It came out with a lot of support material, so it had to sit in the support-removal bath for a while. Here it is covered in support:

FLEBEUJIPVHBSUQ.LARGE

Afterwards, I coated the bracelet with my acetone graphite solution. I did about 3 coats since the design was so complex; I had to ensure I got every little crevice. Here it is as I’m beginning to coat it:

FJ8XSPWIPVHBSXY.LARGE

Alright, now it’s time to electroplate it! I set up the rig and connected the 3D print using A LOT of wire. I then carefully placed it in the nickel acetate bath and let it sit for 7 hours. It’s like watching paint dry:

Here you can finally see what the rig looks like. That rectangular object is my chunk of nickel, connected to the anode of my power supply. I had to rotate my bracelet every so often since my solution didn’t fully cover it. You can see the nickel beginning to cover the print:

F1UK3UCIPVHBSXZ.LARGE

After seven long hours, I pulled the print out and was amazed at the results and how well the graphite took the nickel. Of course, it wasn’t perfect. There were areas of the print that didn’t take as much, and some areas looked a little clumpy. Additionally, the metal was not polished. I purchased some Simichrome and used that to polish it up. It actually worked pretty well. Here is an image of the final product:

FUKVS67IQV7HXKF.LARGE.jpg

Pretty cool stuff. I plan to test it on other objects, but for now I’m pleased I created a new trendy bracelet I can wear 🙂

3DDC 2016

On April 14th, I attended 3DDC, a 3D printing policy event hosted by Public Knowledge. I know this is a rather delayed post, but I think some of the topics brought up at the 3DDC event are worth discussing in this blog. There were panels of expert makers and 3D printing specialists, including some of our own from Techshop. The panels focused on 3D printing as it relates to STEAM education, the environment, bridging the workforce skills gap, and the arts.

I attended the workforce gap and arts panels and was intrigued by some of the issues brought up by both the audience and the panelists. For example, the workforce gap panel discussed the difficulties in teaching older makers how to use new technologies. As someone who grew up using a computer and learned to 3D model at a young age, I had never really thought about this. I always thought desktop 3D printers were relatively simple to use. Export the model as an STL, send it to the printer, make sure there is enough filament, hit the start button, and *voila* a few hours later you have a print (ignoring the potential extruder clog- looking at you, Makerbot). But this process might not be as intuitive to someone who hasn’t used a computer from a young age or seen a 3D printer in action. While working at Techshop, I remember a lady calling in and asking if she could purchase ink and paper for our shop’s 3D printer. Of course it seemed funny at the time, but unless you’ve used a 3D printer, you probably wouldn’t know what the filament is made of or how to load it into the printer. I can understand how learning to use this technology would be frustrating to an older audience. The panel discussed methods of teaching these new technologies to an older age group, from providing free classes at the library to holding workshops for retired veterans at Techshop. I believe you can “teach an old dog new tricks”, but it will take time and effort. Repetition and consistency are key in learning how to use machines and software; conduct tasks over and over until they are ingrained.

The first topic of conversation during the arts panel was using 3D scanning/printing to create replicas of famous pieces of art. The paradigm case: a 3D scan of Nefertiti’s bust. The bust is currently located in the Neues Museum in Berlin and is the subject of an ownership conflict between Germany and Egypt. Two artists, Nikolai Nelles and Nora Al-Badri, snuck a 3D scanner into the museum and were able to gather enough data to create a detailed 3D replica of the bust, which they uploaded online, and had this to say: “With the data leak as a part of this counter narrative we want to activate the artefact, to inspire a critical re-assessment of today’s conditions and to overcome the colonial notion of possession in Germany.” Though new information may have exposed the whole heist as a hoax, it brings up important issues with how we view the intersection of art and technology. What’s the difference between taking a picture at a museum versus a 3D scan? When does it become theft of cultural and artistic property? Does 3D printing an art piece make it a counterfeit? Does it matter who is overseeing the scanning and printing? Many museums are using the technology to preserve and document their collections. For example, look at the work the Smithsonian is conducting: http://www.3d.si.edu/. So what do you think? Is 3D scanning and printing detrimental or beneficial to how we see art?

The arts panel also brought in one of my favorite artists, Francis Bitonti. You might know him for his famous Dita Von Teese 3D printed dress (it’s killer). He is one of the most prominent and innovative artists using 3D printing and I’m excited to see what he has in store for us in the future. Here’s a picture of him during the panel, as well as his 3D printed dress. Overall, I had a great time at 3DDC and was left with many questions about the future of 3D printing.


My first Vive Experience

I’ve been super excited for the release of the Vive ever since I saw this video of artist Alex Briskham creating 3D art with the Vive: https://www.youtube.com/watch?v=EYY-DZ14i9E. As an artist and architect, I am interested in how VR technology will play a role in the representation of 3D spaces. Currently, most architecture firms utilize renders, plans, and sections to convey both the interiors and exteriors of buildings. In undergrad, I would try to create animations and “walkthroughs” of 3D models I created. This is all well and good, but imagine literally walking through a 3D digital representation of the space! VR allows users to really experience a space as they would in real life. That’s incredible.

My brother decided to purchase a Vive and was kind enough to lend it to my boyfriend Sasha and me for a few hours. We hooked it up to my rig (the Vive runs dual displays at a combined resolution of 2160×1200 and a cool 90Hz refresh rate, and therefore needs some serious computing power) and set up the Lighthouse base stations, which work with the sensors in both the headset and controllers to track motion. When I first put it on, I was completely baffled at how realistic it felt. The visuals, along with the completely accurate controllers, really helped sell it as virtual reality. My favorite part was sitting atop Vesper Peak and playing with a robot dog, whereas Sasha really enjoyed playing as an archer in the game Bowslinger (when you release the arrow, the controllers quiver as a bow would; crazy stuff).

Anyways, I thought I would share some images of Sasha’s and my first experience with the Vive. Check ’em out!


Digital Documentation for Heritage Preservation: DC Symposium

I was fortunate enough to attend NCPTT’s Digital Documentation for Heritage Preservation Symposium yesterday, hosted by Mount Vernon. There were some pretty cool people there, including a group from the National Park Service (I got into a great conversation about diversity in parks with a young woman from NPS) and some architecture firms specializing in historic preservation. In the past, I worked at Monticello as an intern, sorting through all of their old architecture documents (random side note: I once found a signed letter from Franklin Roosevelt while working there), so I’m very interested in the overlaps between technology and historic preservation.

There were two lectures that really stood out to me. First, there was the HABS, HAER, HALS lecture by Richard O’Connor from NPS. Richard first discussed how HABS, which was developed in 1933, set a precedent for documentation standards in preservation. He also discussed how HABS, HAER, and HALS were the first heritage documentation programs to be digitized, due to their value for K-12 education (apparently, before this, you would have to go to the Library of Congress to view any of the documents). He then spoke on the pros and cons of laser scanning and digital documentation over manual documentation. He told us that some issues with laser scanning were that 1) people working with laser scanners must have a clear understanding of the tech and specific training on how to use both the hardware and software, and 2) a huge amount of data comes from laser scanning a site, so an office must have high computing power to handle and sort the data. On the other hand, laser scanning is extremely useful for fragile resources. Some sites won’t let preservationists conduct manual documentation because the site is easily damaged, whereas laser scanning is not disruptive and allows data capture in a timely manner. My office at the Department of State hopes to digitally preserve our overseas buildings, so conversations on laser scanning are particularly interesting to me.

The second lecture, led by Terry Kilby, covered drones being used to capture 3D data. Terry owns and runs his own drone company called Elevated Elements; he’s collected 3D data on multiple Baltimore sites using his drones. Currently, most drones use photogrammetry, which means taking photos in a grid pattern with 70-80% overlap to capture a site. He also discussed how some drones will utilize laser scanning capture in the future, which I thought was really cool. And just recently, sense-and-avoid drones were developed, meaning a drone will sense an obstruction in its flight path and move around it. Though I’ve never personally flown a drone, the technology is something I am interested in (specifically because 3D printing drone components is possible nowadays).
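That overlap figure is also why photogrammetry flights need so many photos: with fractional overlap f in each direction, each new photo only advances by (1 - f) of the image footprint. This is my own arithmetic sketch, not something from Terry’s talk, and the site and footprint numbers are made-up examples:

```python
import math

# Rough photo-count estimate for photogrammetry over a rectangular site.
# With fractional overlap `overlap` in each direction, successive photos
# advance by only (1 - overlap) of the ground footprint, so the count
# scales roughly as 1 / (1 - overlap)^2. All dimensions are in meters.

def photos_needed(site_w, site_h, footprint_w, footprint_h, overlap):
    step_w = footprint_w * (1.0 - overlap)  # forward spacing between shots
    step_h = footprint_h * (1.0 - overlap)  # spacing between grid passes
    cols = math.ceil(site_w / step_w)
    rows = math.ceil(site_h / step_h)
    return cols * rows

# A hypothetical 100 m x 100 m site with a 20 m x 15 m image footprint:
# photos_needed(100, 100, 20, 15, 0.75) vs. photos_needed(100, 100, 20, 15, 0.0)
```

Running the hypothetical example, 75% overlap needs roughly fifteen times the photos of a zero-overlap grid, which gives a sense of the data volumes Richard mentioned in the laser scanning lecture.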

Overall, a great lecture series! I’m looking forward to seeing where this tech moves in the future.