COMP3170 Assignment 3

Objectives
This assignment covers the following topics:
• 3D modelling with triangular meshes
• 3D Transformations
• Perspective & Orthographic cameras
• Fragment shaders
• Illumination and shading
• Texturing
• Post-effects
This is a two-person group assignment. Your task is to build a 3D scene of a plane flying over an island with a lighthouse:
Figure 1: Sample output from the assignment

A basic framework for the assignment is available via the GitHub classroom link on iLearn. This includes model files, textures and example world files.
Important: You should also make sure to pull the latest copy of the LWJGL library and wrapper code from the comp3170-23s1-lwjgl repository, as several elements have been updated for this assignment.
Scene features
Your mark will be based on the features you implement, from Table 1 below. Each feature has a mark value attached. The more challenging elements are marked with an asterisk*.
General requirements
Your scene should be implemented using:
1) Anti-aliasing using 4x multisampling.
2) Backface culling
3) Mipmaps for all textures (with trilinear filtering)
4) Gamma correction (with a default gamma of 2.2)
Correctness marks will be deducted if these are not implemented correctly.
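For reference, a minimal sketch of how these settings might be enabled with LWJGL is shown below. Exactly where these calls belong depends on how your renderer is organised in the comp3170 framework, and the 4-sample framebuffer itself must be requested when the window is created (for example via a GLFW_SAMPLES window hint, or the equivalent option in the window wrapper), so treat this as a guide rather than a drop-in solution.

import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL13.GL_MULTISAMPLE;
import static org.lwjgl.opengl.GL30.GL_FRAMEBUFFER_SRGB;
import static org.lwjgl.opengl.GL30.glGenerateMipmap;

public class RenderSettings {

    /** Call once, after the OpenGL context has been created. */
    public static void apply() {
        // Anti-aliasing: assumes the framebuffer was created with 4 samples;
        // this call enables multisample rasterisation.
        glEnable(GL_MULTISAMPLE);

        // Backface culling: skip triangles that face away from the camera.
        glEnable(GL_CULL_FACE);
        glCullFace(GL_BACK);

        // Gamma correction: one option is to let OpenGL convert linear shader output
        // to sRGB (roughly gamma 2.2) on write; the alternative is to apply
        // pow(colour, vec3(1.0 / gamma)) as the last step of your fragment shaders.
        glEnable(GL_FRAMEBUFFER_SRGB);
    }

    /** Mipmaps with trilinear filtering: call per texture, after uploading its image data. */
    public static void useTrilinearMipmaps(int texture) {
        glBindTexture(GL_TEXTURE_2D, texture);
        glGenerateMipmap(GL_TEXTURE_2D);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }
}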

Table 1: The list of features with corresponding mark values (each feature is worth 4%).

Height map terrain
• UVs and texturing
• Texture blending
• Ambient and diffuse lighting

Lighthouse
• Position interpolation
• UVs and texturing
• Ambient and diffuse lighting
• Emissive lighting

Plane
• UVs and texturing
• Ambient and diffuse lighting
• Specular lighting
• Controls
• Propeller spins

Cameras
• Map camera (orthographic)
• Plane camera (perspective, following)
• Lighthouse camera (perspective, look-at)

Water
• Transparency by depth
• Diffuse lighting
• Specular lighting
• Underwater post-effect*

Lights
• Sun (directional)
• Lighthouse (point)

Grass particles
• Particles*
• Lighting*

Scene files
For this assignment, you will be using a JSON scene file to generate particular elements of your scene, such as a height map for scene geometry, as well as positions and values for various entities (see Table 2 for more information).
The assignment framework includes a JSON parser to read in scene files containing the specific data for you to render. Several sample scenes have been included but your code should be designed to work on any valid scene, not just the ones provided.
You can specify the scene file to read on the command-line, or via the Run > Run Configurations … > Arguments > Program arguments option in Eclipse, as shown in Figure 2 below.
Figure 2: Setting program arguments in Eclipse to load the “Circle Mountain 1.json” scene file.

Figure 3 and Figure 4 below show two examples of scene files; the main difference between them is how the height map data is specified. There are two options:
• Specify the filename of a height map image file from which to load the values.
• Specify an array of heights (as well as width and depth values to convert this into a rectangular array).
Valid heights are values between 0 and 1 (inclusive). If an image file is used, heights are based on the greyscale values of each pixel, with black being 0 and white being 1. A selection of realistic height map files is provided in the maps/images folder.
Specifying the height values by hand can be more useful for testing. We have provided some simple maps for this purpose, but you should create your own for debugging specific features.
{
  "heights": "maps/images/Circle Mountain 1.png",
  "scale": [500, 100, 500],
  "water": {
    "height": 55
  },
  "sun": {
    "direction": [1, 1, 0],
    "intensity": [1, 1, 0.9],
    "ambient": [0.11, 0.16, 0.18]
  },
  "lighthouse": {
    "position": [250, 250],
    "intensity": [0.8, 0.8, 0.4],
    "ambient": [0.001, 0.001, 0]
  },
  "plane": {
    "position": [1, 100, 1],
    "heading": 45
  }
}
Figure 3: Example JSON scene file loading a height map from the image Circle Mountain 1.png.

{
  "width": 5,
  "depth": 6,
  "heights": [
    0, 0,   0,   0,   0,
    0, 0.5, 0.5, 0.5, 0,
    0, 0.5, 1.0, 0.5, 0,
    0, 0.5, 1.0, 0.5, 0,
    0, 0.5, 0.5, 0.5, 0,
    0, 0,   0,   0,   0
  ],
  "scale": [160, 40, 160],
  "water": {
    "height": 10
  },
  "sun": {
    "direction": [1, 1, 0],
    "intensity": [1, 1, 0.9],
    "ambient": [0.11, 0.16, 0.18]
  },
  "lighthouse": {
    "position": [80, 80],
    "intensity": [0.8, 0.8, 0.4],
    "ambient": [0.001, 0.001, 0]
  },
  "plane": {
    "position": [1, 1, 1],
    "heading": 45
  }
}
Figure 4: Example JSON scene file loading a height map from a specified array of heights.
The SceneData class in the assignment framework provides an interface to load and access this data. Table 2 provides a description of each item and its corresponding field in the SceneData class.

Table 2: Fields in the scene data file.

Map: heights
  (a) The filename of an image height map to load, or (b) an array of height values between 0 and 1.
  SceneData field: float[] mapHeights

Map: width & depth
  The shape of the height map in terms of the number of vertices in the x (width) and z (depth) directions. Only needed if the heights are specified as an array; otherwise it is the pixel width and height of the image file.
  SceneData fields: int mapWidth, int mapDepth

Map: scale
  The world size of the map as a 3D vector [x, y, z]. x and z represent the total extent of the map in the x and z directions; y represents the height of points with value 1 in the height map.
  SceneData field: Vector3f mapScale

Water: height
  The height of the water in world units.
  SceneData field: float waterHeight

Sun: direction
  The source direction vector for the sun as a 3D vector.
  SceneData field: Vector3f sunDirection

Sun: intensity & ambient
  The light intensity and ambient intensity for the sun, as RGB values.
  SceneData fields: Vector3f sunIntensity, Vector3f sunAmbient

Plane: position
  The starting position of the plane, in world coordinates.
  SceneData field: planePosition

Plane: heading
  The starting heading for the plane, in degrees from the world z-axis.
  SceneData field: planeHeading

Lighthouse: position
  The [x, z] location of the lighthouse in world units. The y value should be calculated from the height map.
  SceneData field: lighthousePosition

Lighthouse: intensity & ambient
  The light intensity and ambient intensity for the lighthouse light, as RGB values.
  SceneData fields: lighthouseIntensity, lighthouseAmbient
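To illustrate how these fields fit together, the sketch below converts a grid coordinate into a world-space vertex position using mapHeights, mapWidth, mapDepth and mapScale. The row-major layout of mapHeights and the placement of the map with its corner at the world origin are assumptions; verify them against the framework’s parser and the sample scenes (a lighthouse at [250, 250] with a scale of 500, for instance, would sit at the centre of a map covering 0–500 in x and z).

import org.joml.Vector3f;

public class TerrainVertex {

    /**
     * World-space position of the terrain vertex at grid coordinate (ix, iz).
     * Assumes mapHeights stores one row of mapWidth values per z index, and that
     * the map covers x in [0, mapScale.x] and z in [0, mapScale.z].
     */
    public static Vector3f position(SceneData scene, int ix, int iz) {
        float raw = scene.mapHeights[iz * scene.mapWidth + ix];    // height value in [0, 1]
        float x = ix / (float) (scene.mapWidth - 1) * scene.mapScale.x;
        float y = raw * scene.mapScale.y;                          // scale to world units
        float z = iz / (float) (scene.mapDepth - 1) * scene.mapScale.z;
        return new Vector3f(x, y, z);
    }
}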

Components
The individual components for the assignment are described in detail below.
Height map
Figure 5: Height map from Island.json scene: (a) Top down isometric view, (b) side perspective view.
• The height map is a rectangular mesh of (width x depth) vertices with a height value between 0 and 1 specified for each vertex (as described in the week 5 lecture).
• This should be scaled to the size specified by the SceneData.mapScale value.
• Vertex normals should be calculated for each vertex in the mesh, following the method discussed in the week 9 workshop.
• Model normals should be input to the vertex shader and converted to world space using a normal matrix.
• Pressing ‘N’ should enable normal debug mode and display the fragment normals as RGB colours with (r,g,b) = (x,y,z) as shown in Figure 6.
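As a starting point, the sketch below shows one common way to estimate a vertex normal from the heights of the four neighbouring vertices (central differences), together with building the normal matrix from the model matrix using JOML. It is an illustration only; check it against the method covered in the week 9 workshop.

import org.joml.Matrix3f;
import org.joml.Matrix4f;
import org.joml.Vector3f;

public class TerrainNormals {

    /**
     * Vertex normal estimated from the world-space heights of the four neighbouring
     * vertices (left/right along x, near/far along z). dx and dz are the world-space
     * spacings between grid vertices.
     */
    public static Vector3f vertexNormal(float hLeft, float hRight, float hNear, float hFar,
                                        float dx, float dz) {
        float dhdx = (hRight - hLeft) / (2 * dx);   // slope along x
        float dhdz = (hFar - hNear) / (2 * dz);     // slope along z
        return new Vector3f(-dhdx, 1, -dhdz).normalize();
    }

    /** Normal matrix: the inverse-transpose of the top-left 3x3 of the model matrix. */
    public static Matrix3f normalMatrix(Matrix4f modelMatrix) {
        return modelMatrix.normal(new Matrix3f());
    }
}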

Figure 6: Top-down and perspective views of terrain in normal debug mode.
UVs and texturing
• Appropriate vertex UVs should be calculated for each vertex in the mesh.
• The grass texture should be used to colour the mesh.
Texture blending
• Parts of the map below the water level should be textured using the sand texture.
• At the water level, there should be a smooth transition from sand to grass.
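The blend itself belongs in the fragment shader, but the sketch below gives the same calculation in Java as a reference: a weight that is 0 below the water level (pure sand), 1 well above it (pure grass), and eases smoothly in between. The blendBand width is an arbitrary value to tune; in GLSL the whole function collapses to a single smoothstep() call.

public class TerrainBlend {

    /**
     * Grass weight for a fragment at world height worldY: 0 means pure sand,
     * 1 means pure grass, with a smooth transition of width 2 * blendBand
     * centred on the water level.
     */
    public static float grassWeight(float worldY, float waterHeight, float blendBand) {
        float t = (worldY - (waterHeight - blendBand)) / (2 * blendBand);
        t = Math.max(0, Math.min(1, t));   // clamp to [0, 1]
        return t * t * (3 - 2 * t);        // smoothstep easing
    }
}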
Ambient and diffuse lighting
• The map should be lit using ambient and diffuse lighting, based on the day/night setting described under Lights below.
Lighthouse
The lighthouse mesh is given to you as a Wavefront OBJ file, lighthouse.obj. The MeshData class provided will parse this file and provide fields for vertex, normal, UV and index arrays. The Lighthouse class provided demonstrates how to load this data and draw the mesh.
Figure 7: Lighthouse (a) day, (b) night.
Position interpolation
• The position of the lighthouse is given as a 2D (x,z) point in world space. You will need to calculate the y-value for its position using the heightmap. Note that this point may lie inside one of the mesh triangles, in which case you should interpolate the y value from the vertices of the triangle.
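A sketch of one way to do this interpolation is given below. It assumes the map’s corner sits at the world origin with a uniform grid spacing, and that each cell is split into two triangles along the diagonal from (ix+1, iz) to (ix, iz+1); adapt the indexing and the triangle split to match your own terrain mesh.

public class HeightInterpolation {

    /**
     * Interpolated terrain height at an arbitrary world-space (x, z) point.
     * cellSize is the world-space spacing between grid vertices, and
     * heights[iz][ix] holds the world-space height of vertex (ix, iz).
     */
    public static float heightAt(float x, float z, float cellSize, float[][] heights) {
        int ix = (int) Math.floor(x / cellSize);
        int iz = (int) Math.floor(z / cellSize);
        float fx = x / cellSize - ix;   // fractional position within the cell, in [0, 1)
        float fz = z / cellSize - iz;

        float h00 = heights[iz][ix];
        float h10 = heights[iz][ix + 1];
        float h01 = heights[iz + 1][ix];
        float h11 = heights[iz + 1][ix + 1];

        if (fx + fz <= 1) {
            // triangle with corners (0,0), (1,0), (0,1) of the cell
            return h00 + fx * (h10 - h00) + fz * (h01 - h00);
        } else {
            // triangle with corners (1,1), (1,0), (0,1) of the cell
            return h11 + (1 - fx) * (h01 - h11) + (1 - fz) * (h10 - h11);
        }
    }
}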
UVs and texturing
• Texture coordinates (UVs) for the lighthouse are specified in the MeshData. You should use these coordinates to texture the lighthouse using the lighthouse-diffuse texture provided.
Ambient and Diffuse lighting
• The lighthouse should be lit using ambient and diffuse lighting, based on the day/night setting described under Lights below.
Emissive lighting
• In Night mode (see Lights below) the light of the lighthouse should appear brightly lit. Use the lighthouse-emissive texture provided as an emissive material. Calculate the emissive contribution as:
I_emissive = I · ρ_e
where I is the intensity of the lighthouse light and ρ_e is the emissive material coefficient read from the texture.

Plane
The plane mesh is given to you as two Wavefront OBJ files:
• plane_body.obj describing the body of the plane
• plane_propeller.obj describing the propeller
The propeller attaches to the body of the plane at (0, 0, 1.982) in model coordinates.
Figure 8: Textured plane model with diffuse lighting and specular highlights.
UVs and texturing
• Texture coordinates (UVs) for the plane are specified in the MeshData for each part. You should use these coordinates to texture the body and propeller using the plane-diffuse texture provided (both parts use the same texture).
Ambient and diffuse lighting
• The plane should be lit using ambient and diffuse lighting, based on the day/night setting described under Lights below.
Specular lighting
• Specular highlights should be added to the plane, reflecting the light source.
Controls
The plane should be controlled using the arrow keys and spacebar (see the sketch below):
• Pressing up/down should pitch the plane up and down, to specified maximum angles.
• Pressing left/right should roll the plane left and right, to specified maximum angles.
• When the plane is rolled left (or right) it should turn left (right) at a rate proportional to the roll angle.
• Pressing space will move the plane forward at a fixed speed. Releasing space will cause the plane to stop moving forward (immediately).

Propeller spins
• The plane’s propeller should rotate.
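A sketch of one possible control scheme is shown below. The rates, maximum angles and rotation order are arbitrary choices, and the signs depend on which way your model faces, so treat it as a starting point rather than the required behaviour; the framework’s input handling should supply the key states and the per-frame deltaTime.

import org.joml.Matrix4f;
import org.joml.Vector3f;

public class PlaneController {

    private static final float MAX_PITCH = (float) Math.toRadians(30);
    private static final float MAX_ROLL = (float) Math.toRadians(45);
    private static final float PITCH_RATE = (float) Math.toRadians(60);     // radians per second
    private static final float ROLL_RATE = (float) Math.toRadians(90);
    private static final float TURN_RATE = (float) Math.toRadians(60);      // turn rate at full roll
    private static final float SPEED = 20;                                  // world units per second
    private static final float PROPELLER_SPEED = (float) Math.toRadians(720);

    private final Vector3f position = new Vector3f();
    private float pitch, roll, heading, propellerAngle;    // radians

    public void update(float deltaTime, boolean up, boolean down,
                       boolean left, boolean right, boolean thrust) {
        // Pitch and roll toward the pressed direction, clamped to the maximum angles.
        if (up)    pitch = Math.min(MAX_PITCH, pitch + PITCH_RATE * deltaTime);
        if (down)  pitch = Math.max(-MAX_PITCH, pitch - PITCH_RATE * deltaTime);
        if (left)  roll = Math.min(MAX_ROLL, roll + ROLL_RATE * deltaTime);
        if (right) roll = Math.max(-MAX_ROLL, roll - ROLL_RATE * deltaTime);

        // Turn at a rate proportional to the current roll angle.
        heading += TURN_RATE * (roll / MAX_ROLL) * deltaTime;

        // Move forward only while space is held; stop immediately when it is released.
        if (thrust) {
            position.x += SPEED * deltaTime * (float) (Math.sin(heading) * Math.cos(pitch));
            position.y += SPEED * deltaTime * (float) Math.sin(pitch);
            position.z += SPEED * deltaTime * (float) (Math.cos(heading) * Math.cos(pitch));
        }

        // The propeller spins continuously.
        propellerAngle += PROPELLER_SPEED * deltaTime;
    }

    /** Model matrix for the plane body: translate, then heading, pitch and roll. */
    public Matrix4f bodyMatrix(Matrix4f dest) {
        return dest.identity()
                   .translate(position)
                   .rotateY(heading)
                   .rotateX(-pitch)    // flip signs here if your model pitches or banks the wrong way
                   .rotateZ(roll);
    }

    /** Propeller: the body transform, then the attachment offset, then the spin about z. */
    public Matrix4f propellerMatrix(Matrix4f dest) {
        return bodyMatrix(dest)
                   .translate(0, 0, 1.982f)
                   .rotateZ(propellerAngle);
    }
}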
Cameras
There are three different camera modes: Map, Plane and Lighthouse, which can be activated using the 1, 2 and 3 keys respectively.
Map camera (orthographic)
Figure 9: Map camera shows a top-down orthographic view of the entire map.
• The map camera is a top-down orthographic view of the entire map.
• Resizing the window should make the map larger or smaller.
• The map should be centred in the window.
• If the aspect of the window does not match the aspect of the map, then black bars should be drawn on the left and right or top and bottom, depending on whether the window is too wide or too tall.
• Near and far planes should be set so the entire map is visible (including the lighthouse). If the plane flies above this level, it may not be visible.
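One way to get the black bars is to keep the orthographic volume matched to the map and shrink the viewport to a centred rectangle with the map’s aspect ratio, clearing the full window to black before clearing the viewport region to the sky colour. The sketch below assumes this approach and a camera looking straight down from above the terrain; the near and far values, and whether you letterbox via the viewport or by widening the view volume instead, are up to you.

import static org.lwjgl.opengl.GL11.glViewport;

import org.joml.Matrix4f;

public class MapCamera {

    /** Orthographic view volume that exactly covers the map in x and z. */
    public static Matrix4f projection(float mapSizeX, float mapSizeZ,
                                      float near, float far, Matrix4f dest) {
        return dest.setOrtho(-mapSizeX / 2, mapSizeX / 2,
                             -mapSizeZ / 2, mapSizeZ / 2, near, far);
    }

    /**
     * Letterbox the viewport so the map keeps its aspect ratio and stays centred,
     * leaving bars at the sides or the top and bottom.
     */
    public static void setLetterboxViewport(int windowWidth, int windowHeight, float mapAspect) {
        float windowAspect = (float) windowWidth / windowHeight;
        int vpWidth = windowWidth;
        int vpHeight = windowHeight;
        if (windowAspect > mapAspect) {
            vpWidth = Math.round(windowHeight * mapAspect);    // window too wide: bars left and right
        } else {
            vpHeight = Math.round(windowWidth / mapAspect);    // window too tall: bars top and bottom
        }
        glViewport((windowWidth - vpWidth) / 2, (windowHeight - vpHeight) / 2, vpWidth, vpHeight);
    }
}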

Plane camera (perspective, following)
Figure 10: Plane camera shows a perspective view from above and behind the plane.
• The plane camera is a perspective camera that follows the plane.
• The camera should always be above and behind the plane.
• The camera should not rotate when the plane is rolled or pitched.
• The plane should always be centred in the window.
• The camera should have a constant vertical field of view.
• Resizing the window should change the aspect of the camera view volume to match.
• Near and far planes should be set so there is no obvious geometry culling.
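A sketch of one way to build this camera with JOML is shown below. The offsets behind and above the plane, the field of view and the clipping planes are arbitrary values to tune; only the plane’s heading is used when placing the camera, which is what keeps the view level when the plane rolls or pitches.

import org.joml.Matrix4f;
import org.joml.Vector3f;

public class PlaneCamera {

    private static final float FOVY = (float) Math.toRadians(60);   // constant vertical field of view
    private static final float BACK = 15;                           // distance behind the plane
    private static final float UP = 6;                              // distance above the plane

    /** View matrix for a camera above and behind the plane, using the heading only. */
    public static Matrix4f viewMatrix(Vector3f planePos, float heading, Matrix4f dest) {
        Vector3f eye = new Vector3f(
                planePos.x - BACK * (float) Math.sin(heading),
                planePos.y + UP,
                planePos.z - BACK * (float) Math.cos(heading));
        return dest.setLookAt(eye, planePos, new Vector3f(0, 1, 0));
    }

    /** Projection with a fixed vertical FOV; the aspect ratio follows the window size. */
    public static Matrix4f projection(int windowWidth, int windowHeight, Matrix4f dest) {
        float aspect = (float) windowWidth / windowHeight;
        return dest.setPerspective(FOVY, aspect, 0.5f, 2000f);
    }
}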
Lighthouse camera (perspective, look-at)
Figure 11: Lighthouse camera shows a perspective view of the plane from the lighthouse.
• The lighthouse camera is a perspective camera that follows the plane from the perspective of the lighthouse.
• The camera should be positioned at (0, 30, 0) in lighthouse model coordinates.

• The camera should rotate so that the plane is always centered in the window and vertical lines in the world remain vertical in the view.
• This can be achieved (for partial marks) by using the Matrix4f.lookAt() method in JOML, but for full marks the camera coordinate frame should be constructed directly.
• The camera should have a constant vertical field of view.
• Resizing the window should change the aspect of the camera view volume to match.
• Near and far planes should be set so the entire map is visible (including the lighthouse). If the plane flies above this level, it may not be visible.
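The sketch below shows one way to construct the camera frame directly rather than calling lookAt: the k axis runs from the plane back towards the camera, i is forced horizontal by crossing the world up vector with k, and j completes the right-handed frame, which is what keeps vertical lines vertical in the view. cameraPos is assumed to already be the world-space result of transforming (0, 30, 0) by the lighthouse’s model matrix.

import org.joml.Matrix4f;
import org.joml.Vector3f;

public class LighthouseCamera {

    /** View matrix for a camera at cameraPos keeping planePos centred in the view. */
    public static Matrix4f viewMatrix(Vector3f cameraPos, Vector3f planePos, Matrix4f dest) {
        Vector3f k = new Vector3f(cameraPos).sub(planePos).normalize();          // view "backward" axis
        Vector3f i = new Vector3f(0, 1, 0).cross(k, new Vector3f()).normalize(); // horizontal right axis
        Vector3f j = new Vector3f(k).cross(i, new Vector3f());                   // camera up axis

        // Camera model matrix (camera space to world space) with i, j, k as its columns...
        Matrix4f cameraModel = new Matrix4f(
                i.x, i.y, i.z, 0,
                j.x, j.y, j.z, 0,
                k.x, k.y, k.z, 0,
                cameraPos.x, cameraPos.y, cameraPos.z, 1);

        // ...and the view matrix is its inverse.
        return cameraModel.invert(dest);
    }
}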
Water
Figure 12: Water with depth-based transparency, diffuse and specular lighting.
• The water should be a flat plane with the same x-z dimensions as the heightmap.
• Pressing the W key should enable and disable the water.
Transparency by depth
• The water should be transparent.
• The alpha value at any point on the water surface should be calculated between specified minimum and maximum values, based on the vertical distance h from the water plane to the height map below, as:
α = Lerp(α_min, α_max, h / H)
where H is the water level specified in the scene data.
Diffuse lighting
• The water should be lit using ambient and diffuse lighting, based on the day/night setting described under Lights below.
• The water should be coloured blue.

Specular lighting
• Specular highlights should be added to the water, reflecting the light source.
Underwater post-effect*
• When the plane camera is under the water, the whole scene should be blurry and tinted blue using a post-effect filter.
Figure 13: Underwater post-effect with blur and blue tint.
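A common way to implement a post-effect is to render the whole scene into an offscreen framebuffer, then draw a full-screen quad that samples the result with a blur-and-tint fragment shader whenever the active camera is below the water level. The sketch below covers only the framebuffer setup (the quad, the shader and the underwater test are up to you) and assumes you manage the framebuffer yourself rather than through any helper the framework may provide.

import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL14.GL_DEPTH_COMPONENT24;
import static org.lwjgl.opengl.GL30.*;

public class PostEffectBuffer {

    private final int framebuffer;
    private final int colourTexture;
    private final int depthBuffer;
    private final int width, height;

    /** Offscreen render target that the scene is drawn into before the post-effect pass. */
    public PostEffectBuffer(int width, int height) {
        this.width = width;
        this.height = height;

        // Colour attachment: a texture the post-effect shader can sample from.
        colourTexture = glGenTextures();
        glBindTexture(GL_TEXTURE_2D, colourTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE,
                (java.nio.ByteBuffer) null);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // Depth attachment: a renderbuffer, since the depth values are not sampled later.
        depthBuffer = glGenRenderbuffers();
        glBindRenderbuffer(GL_RENDERBUFFER, depthBuffer);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

        framebuffer = glGenFramebuffers();
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colourTexture, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBuffer);
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            throw new IllegalStateException("Post-effect framebuffer is incomplete");
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    /** Draw the scene between bind() and unbind(), then draw a full-screen quad sampling getColourTexture(). */
    public void bind() {
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
        glViewport(0, 0, width, height);
    }

    public void unbind() {
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    public int getColourTexture() {
        return colourTexture;
    }
}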

Lights
There should be two modes: Day and Night. The lights in the scene change depending on which mode is active.
• Pressing the D key switches between Day and Night.
• During the Day, the sky is blue.
• During the Night, the sky is black.
Figure 14: Lighting the scene in (a) Day mode, (b) Night mode.
Sun (directional)
• During the day, lighting calculations should be done using a directional light with direction and intensity parameters specified by the sun in the scene data.
Lighthouse (point)
• During the night, lighting calculations should be done using a point light located at (0,30,0) in lighthouse model coordinates, with intensity parameters specified by the lighthouse in the scene data.
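The day/night switch mostly changes how the light vector is computed in the lighting equations. The sketch below expresses the two cases, plus the shared diffuse term, in Java as a reference for what the fragment shader needs; the point-light position is assumed to be (0, 30, 0) transformed into world space by the lighthouse’s model matrix, and whether you also attenuate the point light with distance is left to you.

import org.joml.Vector3f;

public class LightModel {

    /** Day: directional light. The light vector is the sun's source direction, the same for every fragment. */
    public static Vector3f directionalLightVector(Vector3f sunDirection) {
        return new Vector3f(sunDirection).normalize();
    }

    /** Night: point light. The light vector points from the surface towards the lighthouse light. */
    public static Vector3f pointLightVector(Vector3f lightWorldPos, Vector3f surfaceWorldPos) {
        return new Vector3f(lightWorldPos).sub(surfaceWorldPos).normalize();
    }

    /** Diffuse term shared by both cases: intensity * albedo * max(0, n . l). */
    public static Vector3f diffuse(Vector3f intensity, Vector3f albedo,
                                   Vector3f normal, Vector3f lightVector) {
        float nDotL = Math.max(0, new Vector3f(normal).normalize().dot(lightVector));
        return new Vector3f(intensity).mul(albedo).mul(nDotL);
    }
}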

Particles*
Figure 15: Grass particles (a) enabled, (b) disabled.
• 10,000 grass particles should be placed at random points on the height map (above the water level).
• Particles should be drawn using the grass-particle texture provided.
• Particles should be oriented so the y axis is always vertical.
• Particles should be rotated so the z axis always points toward the active camera.
• Particles should be drawn using instancing.
• Particles are transparent, so they should be drawn in back-to-front order based on their z-distance from the camera (Painter’s algorithm).
• Pressing the G key should enable and disable the grass particles.
• Grass particles should be lit using ambient and diffuse lighting.
• The normal for a particle should be equal to the heightmap normal for that point. If the particle lies inside a triangle of the heightmap mesh, the normal should be interpolated from the neighbouring vertex normals.
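The sketch below shows the two geometric pieces in Java: a per-particle model matrix that keeps the y axis vertical while rotating the particle’s z axis towards the camera, and a back-to-front sort before the instanced draw call. Sorting by straight-line distance from the camera is used here for simplicity; sorting by view-space z-distance as described above is the same idea, as long as the farthest particles are drawn first.

import java.util.Arrays;
import java.util.Comparator;

import org.joml.Matrix4f;
import org.joml.Vector3f;

public class GrassParticles {

    /**
     * Model matrix for one particle: translate to its point on the height map, then
     * rotate about the vertical y axis so the particle's z axis points towards the
     * camera. Only the camera's horizontal position matters, so the particle never tips.
     */
    public static Matrix4f billboardMatrix(Vector3f particlePos, Vector3f cameraPos, Matrix4f dest) {
        float angle = (float) Math.atan2(cameraPos.x - particlePos.x,
                                         cameraPos.z - particlePos.z);
        return dest.identity().translate(particlePos).rotateY(angle);
    }

    /**
     * Painter's algorithm: sort the particle positions so the farthest from the camera
     * come first, before writing the per-instance data and issuing the instanced draw.
     */
    public static void sortBackToFront(Vector3f[] positions, Vector3f cameraPos) {
        Arrays.sort(positions, Comparator.comparingDouble(
                (Vector3f p) -> -p.distanceSquared(cameraPos)));
    }
}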

Documentation
In addition to your code, you should submit a PDF report following the Word template provided. The report should include a completed table indicating the features you have attempted. For attempted components, additional documentation should be given:
• An illustration of the scene graph used in your scene.
• Diagrams illustrating the construction of the height map, including:
  o Mesh (vertices and triangles)
  o Normals
• Diagrams illustrating the view volumes of the three camera modes:
  o Map
  o Plane
  o Lighthouse (including the construction of the view matrix)
• Diagrams illustrating the lighting calculations, including:
  o Diffuse and specular lighting
  o Directional (sun) and point (lighthouse) lights
  o Orthographic (map) and perspective (plane) cameras
• Diagrams illustrating the construction of the grass particle, including:
  o The particle mesh
  o Construction of the model matrix
  o Lighting calculations
Submission
Your Java project will be submitted using Github Classroom. Your most recent commit to the repository before the assignment deadline will be marked.
Your report will be submitted using iLearn as a PDF.
Peer assessment
You will also submit individual peer assessment reports, using the template provided on iLearn, to assess both your own contribution and that of your teammate. You need to provide a grade (following the rubric given in the template) and a justification for the grade. This grade will be kept private from your teammate but will be used as evidence to adjust the final individual grade weighting.

Your marks will be calculated using three components (according to the rubric below):
• Correctness: Whether your code is correctly implemented.
• Clarity: Whether your code is easy to understand.
• Documentation: Whether your report contains all the required elements.
This mark will be scaled based on a Completeness mark, which refers to the total value of the components you have attempted.
Your final group mark will be determined using the formula:
SQRT(Completeness * (60% * Correctness + 20% * Clarity + 20% * Documentation))
So, for example if you attempt 80% of the features above, with perfect correctness (100%), slightly sloppy code (70%) and some minor sloppiness in the document (80%), your final mark would be:
SQRT(80% * (60% * 100% + 20% * 70% + 20% * 80%)) = SQRT(80% * (60% + 14% + 16%))
= SQRT(80% * 90%) = 84.9%
On the other hand, if you only attempt 50% of the features above, to the same level of quality (90%) your final mark would be:
SQRT(50% * 90%) = 67.1%
Individual marks will be assigned based on peer assessment and evidence of contribution (e.g. commits in the GitHub repo). Generally, your individual mark will be equal to your group mark unless there is evidence of a substantial disparity in the amount of work performed by each group member.

Grade bands, from highest to lowest. Each band lists descriptors for Correctness (60%), Clarity (20%) and Documentation (20%).

• Correctness: Excellent work. Code is free from any apparent errors. Problems are solved in a suitable fashion. Contains no irrelevant code.
  Clarity: Good consistent style. Well structured & commented code. Appropriate division into classes and methods, to make the implementation clear.
  Documentation: All sections are complete and accurately represent the code. All diagrams are neat, clear and well annotated.

• Correctness: Very good work. Code has minor errors which do not significantly affect performance. Contains no irrelevant code.
  Clarity: Code is readable with no significant code-smell. Code architecture is adequate but could be improved.
  Documentation: All sections attempted with minor sloppiness or missing detail. No discrepancies between documentation and code.

• Correctness: Good work. Code has one or two minor errors that affect performance. Problems may be solved in ways that are convoluted or otherwise show a lack of understanding. Contains some copied code that is not relevant to the problem.
  Clarity: Code is readable but has some code-smell that needs to be addressed. Code architecture is adequate but could be improved.
  Documentation: All sections attempted with minor sloppiness or missing detail. Minor discrepancies between documentation and code.

• Correctness: Poor. Code is functional but contains major flaws. Contains large passages of copied code that are not relevant to the problem.
  Clarity: Significant issues with code quality. Inconsistent application of style. Poor readability with code-smell issues. Code architecture could be improved.
  Documentation: All sections attempted with significant sloppiness and missing detail. Minor discrepancies between documentation and code.

• Correctness: Code compiles and runs, but major elements are not functional.
  Clarity: Significant issues with code quality. Inconsistent application of style. Poor readability with code-smell issues. Messy code architecture with significant encapsulation violations.
  Documentation: Some aspects incomplete. Diagrams unclear and badly drawn. Does not make use of graph paper. Coordinate systems not properly annotated. Major discrepancies between documentation and code.