Back in 2010, there were few GPU-based rendering engines that supported reflection and refraction and used the tall and short blocks of the Cornell Box (https://www.graphics.cornell.edu/online/box/data.html) instead of spheres. Even today, little has changed; go and check https://www.google.com/search?q=cornell+box+render&tbm=isch :)
More than 10 years have passed. I miss the days when I was young.
At least there are some images to remember:
The Cornell Box scene file, for anyone interested: cornellbox_color.scene.txt
This project ports a CPU raytracer to GPUs (Graphics Processing Units), achieving up to a 10-fold speedup over an optimized CPU implementation. Two challenges came up: rewriting the recursion as an iterative loop, and implementing a random number generator inside the GPU kernel.
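The two challenges can be sketched in a few lines. This is a minimal illustration, not the project's actual CUDA code: a recursive trace of reflection/refraction is replaced by an explicit stack of weighted rays, and a small xorshift generator stands in for a per-thread GPU RNG. The `shade`, `reflect`, and `refract` callables here are hypothetical stand-ins for the scene code.

```python
MAX_DEPTH = 4

def trace_iterative(primary_ray, shade, reflect, refract):
    """Iterative alternative to recursive ray tracing, as needed on GPUs.
    shade(ray) -> (local_color, kr, kt); reflect/refract(ray) -> ray or None."""
    color = [0.0, 0.0, 0.0]
    stack = [(primary_ray, 1.0, 0)]            # (ray, accumulated weight, depth)
    while stack:
        ray, weight, depth = stack.pop()
        local, kr, kt = shade(ray)
        for c in range(3):
            color[c] += weight * local[c]      # add this bounce's contribution
        if depth + 1 >= MAX_DEPTH:
            continue
        r = reflect(ray)
        if r is not None and kr > 0.0:
            stack.append((r, weight * kr, depth + 1))
        t = refract(ray)
        if t is not None and kt > 0.0:
            stack.append((t, weight * kt, depth + 1))
    return color

def gpu_rand(seed):
    """Xorshift32: a tiny, branch-free PRNG of the kind usable per GPU thread."""
    seed ^= (seed << 13) & 0xFFFFFFFF
    seed ^= seed >> 17
    seed ^= (seed << 5) & 0xFFFFFFFF
    return seed, seed / 4294967296.0           # new state, uniform in [0, 1)
```

The stack replaces the call stack that recursion would need; each entry carries the product of reflectances along its path, so contributions can be summed in any order.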
Photorealistic rendering, such as physically based Monte Carlo image rendering, can be improved by accurate BRDF sampling. This project uses samples from CUReT (the Columbia-Utrecht Reflectance and Texture Database) and implements a Monte Carlo raytracer with both complex direct lighting and global illumination.
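As the simplest analogue of BRDF importance sampling (the project samples measured CUReT BRDFs, which is more involved), here is the standard cosine-weighted hemisphere sampler used for diffuse surfaces in a Monte Carlo raytracer:

```python
import math

def sample_cosine_hemisphere(u1, u2):
    """Map two uniform numbers in [0,1) to a direction on the hemisphere
    with probability density cos(theta)/pi (Malley's method)."""
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    x = r * math.cos(phi)
    y = r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))      # z = cos(theta) >= 0
    return (x, y, z)
```

Drawing directions proportional to the cosine term cancels it from the rendering-equation estimator, which lowers variance for diffuse lighting.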
The input is shown below. I apply a 3×3 Gaussian filter to the dark area of the output image, but 3×3 is so small that you won't even notice. I am not showing how to apply the mapping in OpenGL. Most of the work is done in the fragment shader, not the vertex shader; I use just 4 vertices and a bitmap.
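For reference, the 3×3 Gaussian filter mentioned above amounts to convolving with the kernel 1/16 · [1 2 1; 2 4 2; 1 2 1]. A minimal sketch on a grayscale image stored as a list of lists (borders left untouched for brevity):

```python
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]                         # weights sum to 16

def gaussian3x3(img):
    """Apply a 3x3 Gaussian blur to interior pixels of a grayscale image."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]            # border pixels stay unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    s += KERNEL[dy + 1][dx + 1] * img[y + dy][x + dx]
            out[y][x] = s / 16.0
    return out
```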
GLSL requires us to set up an environment, along with vertices and textures. Once that job is done, programming is swift and awesome. I like writing C-style code and running it like a scripting language, without wasting time on compilation. By the way, the project compiles on both Windows and Linux.
This demo is just for me to understand anaglyph 3D. Last week I bought anaglyph glasses with red-cyan filters. I spent several days on this small project, and I believe similar code could work in OpenGL drivers, bringing higher fidelity to games and 3D modeling software. Note: you have to put on your anaglyph glasses to see the real 3D.
Unlike other optimization-based fairing methods, which are computationally more expensive, Taubin smoothing runs in linear time and space. With this algorithm, fairing very large surfaces, such as those obtained from volumetric medical data, becomes affordable. Implemented with CGAL: taubin.zip
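The core of Taubin smoothing is alternating two Laplacian steps, a shrinking step with factor λ > 0 and an inflating step with factor μ < 0 (|μ| slightly larger than λ), which smooths without the shrinkage of plain Laplacian smoothing. A minimal sketch on a mesh given as vertex positions plus adjacency lists (the actual project uses CGAL; λ and μ below are typical values, not the project's):

```python
LAMBDA, MU = 0.33, -0.34                     # shrink factor, inflate factor

def taubin_smooth(verts, neighbors, iterations=10):
    """verts: list of [x,y,z]; neighbors[i]: indices adjacent to vertex i."""
    verts = [list(v) for v in verts]
    for _ in range(iterations):
        for factor in (LAMBDA, MU):          # one shrink step, one inflate step
            new = []
            for i, v in enumerate(verts):
                nbrs = neighbors[i]
                if not nbrs:
                    new.append(v[:])
                    continue
                # move each vertex toward (lambda) or away from (mu)
                # the average of its neighbors
                avg = [sum(verts[j][k] for j in nbrs) / len(nbrs) for k in range(3)]
                new.append([v[k] + factor * (avg[k] - v[k]) for k in range(3)])
            verts = new
    return verts
```

Each pass touches every vertex once per step, which is where the linear time and space complexity comes from.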
Popular subdivision algorithms like Catmull-Clark and Loop are C² almost everywhere, but suffer from shape artifacts and reduced smoothness precisely near the so-called "extraordinary vertices" that motivate their use. This project implements the algorithm from the paper Bi-3 C² Polar Subdivision, generating surfaces of good shape and piecewise degree bi-3 in the polar setting.
In the following animation, the first pole shown is subdivided by bi-3 C² polar subdivision, and the other pole by Catmull-Clark subdivision.
This project solves two problems: photo stitching, and stitching for a panoramic scene.
For stitching, given two images of the same scene, we first use SIFT (Scale-Invariant Feature Transform) to find corresponding points, and then use RANSAC (RANdom SAmple Consensus) to determine which correspondences are good.
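To show the RANSAC step in miniature: the sketch below fits a pure translation between correspondences instead of the full homography used for real stitching, but the loop structure (sample a minimal set, count inliers, keep the best model, refit on its inliers) is the same. All names here are illustrative.

```python
import random

def ransac_translation(pairs, iters=200, tol=2.0, seed=0):
    """pairs: list of ((x1,y1),(x2,y2)) correspondences.
    Returns ((dx,dy), inliers) for the best translation model found."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(pairs)   # minimal sample: one pair
        dx, dy = x2 - x1, y2 - y1
        inliers = [p for p in pairs
                   if abs(p[1][0] - p[0][0] - dx) <= tol
                   and abs(p[1][1] - p[0][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refit the model on all inliers of the best hypothesis
    dx = sum(q[0] - p[0] for p, q in best_inliers) / len(best_inliers)
    dy = sum(q[1] - p[1] for p, q in best_inliers) / len(best_inliers)
    return (dx, dy), best_inliers
```

Because each hypothesis needs only one correspondence, even a large fraction of bad SIFT matches is tolerated: any iteration that happens to sample a good match recovers the true model.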
Input images:
Output image after stitching:
Now we know which pairs of images are connected, or rather, overlap. For the panoramic scene, we find the largest connected component. For example, if A and B are connected, and B and C are connected, then ABC is the largest connected component. But there is a problem: if C and A are also connected, there is a cycle. We simply break the cycle if it exists.
Actually, I learned the algorithm for finding the largest component in an algorithms course.
This project clones a source image patch into a target image, an operation often carried out by solving a Poisson equation. I implemented this paper, which instead uses Mean-Value Coordinates (MVC). It is very fast and needs little memory.
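The heart of the MVC approach is computing, for each interior point, a set of weights over the patch boundary; the boundary color differences are then interpolated with those weights instead of solving a global Poisson system. A sketch of the standard mean-value weight formula for a point p inside a polygon (simplified to 2D, not the paper's full pipeline):

```python
import math

def mvc_weights(p, poly):
    """Mean-value coordinates of point p w.r.t. polygon vertices poly.
    w_i is proportional to (tan(a_{i-1}/2) + tan(a_i/2)) / |v_i - p|,
    where a_i is the angle at p spanned by v_i and v_{i+1}."""
    n = len(poly)

    def ang(i, j):                            # angle at p between vertices i and j
        ax, ay = poly[i][0] - p[0], poly[i][1] - p[1]
        bx, by = poly[j][0] - p[0], poly[j][1] - p[1]
        c = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        return math.acos(max(-1.0, min(1.0, c)))

    w = []
    for i in range(n):
        a_prev = ang((i - 1) % n, i)
        a_next = ang(i, (i + 1) % n)
        d = math.hypot(poly[i][0] - p[0], poly[i][1] - p[1])
        w.append((math.tan(a_prev / 2.0) + math.tan(a_next / 2.0)) / d)
    s = sum(w)
    return [wi / s for wi in w]               # normalized: weights sum to 1
```

Because each point's weights depend only on the boundary, every pixel can be evaluated independently, which is what makes the method fast and memory-light.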
Input images:
Output:
Another application, edge removal:
This project is similar to the project "Photo Stitching for Panoramic Photography". Our target is a program for smartphones. Suppose you browse a web page and find a beautiful image. You tap the image on the touch screen so that it is displayed in full-screen mode. Now, as you hold your smartphone and turn around, our program changes the view according to how many degrees you turn.
A panorama of UCI campus:
Given several images, we can compute derivatives along the x and y axes; surface fitting then finds a surface that preserves the original surface. This is equivalent to minimizing the error of a set of equations. We use the Conjugate Gradient method to solve problems like this.
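The least-squares system that arises from the gradient constraints is symmetric positive definite, which is exactly what Conjugate Gradient wants. A minimal dense-matrix sketch of the method (a real surface-fitting solve would use a sparse matrix, and this is generic CG, not the project's code):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A (dense lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # residual b - A x, with x = 0
    p = r[:]                                  # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]      # step along p
        r = [r[i] - alpha * Ap[i] for i in range(n)]     # update residual
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]  # conjugate direction
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most n iterations, but for the large, well-conditioned systems from image gradients it gets close far sooner, which is why it is the standard choice here.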
Input images (they are different :)):
Resulting surface:
The Gaussian sphere usually has unit radius. We can think of a Gaussian sphere as a tetrahedron subdivided several times, with each face assigned a color after subdivision. Given a 3D model formed by triangles, we can pick two adjacent triangles, compute their normals, and map the normals onto the Gaussian sphere. After this mapping, if two normals land on the same color, we can consider the original triangles to lie on the same plane. If many triangles share the same normal on the Gaussian sphere, we can represent them with a single polygon. Hence we can simplify 3D models.
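The colored faces of the subdivided sphere are, in effect, discrete bins of directions. As an illustrative stand-in, the sketch below quantizes normals by spherical angles instead of by subdivided tetrahedron faces; triangles whose normals fall in the same bin are grouped, just as same-colored normals are grouped above.

```python
import math

def normal_bin(normal, bins=16):
    """Quantize a direction into an (angular) bin on the sphere."""
    x, y, z = normal
    length = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / length, y / length, z / length
    theta = math.acos(max(-1.0, min(1.0, z)))        # polar angle in [0, pi]
    phi = math.atan2(y, x) % (2.0 * math.pi)         # azimuth in [0, 2*pi)
    return (int(theta / math.pi * bins), int(phi / (2.0 * math.pi) * bins))

def group_coplanar(triangles, normals, bins=16):
    """Group triangles whose normals land in the same bin."""
    groups = {}
    for tri, nrm in zip(triangles, normals):
        groups.setdefault(normal_bin(nrm, bins), []).append(tri)
    return groups
```

The finer the subdivision (more bins), the stricter the "same plane" test, trading simplification rate for geometric fidelity.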
Original mesh:
Choose triangles that have same normal on Gaussian sphere and form polygons:
Triangulate polygons:
Given a graph G = (V, E), we say a cycle C in G is a Hamiltonian cycle if it visits each node exactly once. One graph may contain several Hamiltonian cycles, and our job is to find the shortest one. The entry A[i,j] in the matrix A below denotes the distance between nodes i and j. Notice that we have modified the weights in the top-right and bottom-left corners to be -999, which is smaller than the total weight of any path from the start point to the end point. Any proper TSP solver will then return a tour that includes the edge from A1 to A15; removing this edge yields the desired acyclic path.
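The trick can be demonstrated end to end on a tiny instance. The sketch below uses a brute-force TSP solver for clarity (a real solver would replace the `permutations` loop); the -999 edge forces every optimal tour through the start-end pair, and dropping that edge leaves the shortest Hamiltonian path.

```python
from itertools import permutations

BIG_NEG = -999                                # below the total weight of any path

def shortest_ham_path(dist, start, end):
    """Shortest Hamiltonian path from start to end, via the forced-edge trick."""
    n = len(dist)
    d = [row[:] for row in dist]
    d[start][end] = d[end][start] = BIG_NEG   # every optimal tour now uses this edge
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):    # brute-force TSP, node 0 fixed
        tour = (0,) + perm
        cost = sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    # rotate the cycle so it starts at `start`; `end` is adjacent via the forced edge
    k = best_tour.index(start)
    tour = best_tour[k:] + best_tour[:k]
    if tour[1] == end:                        # walk the cycle so that `end` comes last
        tour = (tour[0],) + tuple(reversed(tour[1:]))
    return tour                               # path start -> ... -> end, edge removed
```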
Here is an example. Suppose you live in San Diego and your destination is Yosemite National Park. On the way you want to visit Concord, Fremont, Los Angeles, Monterey, Morro Bay, Oxnard, Petaluma, Richmond, Sunnyvale, Santa Cruz, San Francisco, and Santa Monica as well. What is the shortest path if you don't care about the order? Dijkstra's algorithm will give you a path that usually ignores the cities on the list. In this project I solved the problem without dropping a single spot along the way. As far as I know, neither any GPS company nor Google Maps provides this feature. For the GUI, I draw the path with the Google Maps API and use JavaScript to show some nice pictures on the web page.
Photos can be browsed in time order or by capture location. Cameras like the Nikon Coolpix P6000 embed GPS information in photos. With help from such cameras, we can sort photos by their geo-locations. To solve this problem, a TSP (Travelling Salesman Problem) solver is used.