# COMP1531 Major Project
**UNSW Memes**
## Contents
## 0. Aims:
1. Demonstrate effective use of software development tools to build full-stack end-user applications.
2. Demonstrate effective use of static testing, dynamic testing, and user testing to validate and verify software systems.
3. Understand key characteristics of a functioning team in terms of understanding professional expectations, maintaining healthy relationships, and managing conflict.
4. Demonstrate an ability to analyse complex software systems in terms of their data model, state model, and more.
5. Understand the software engineering life cycle in the context of modern and iterative software development practices in order to elicit requirements, design systems thoughtfully, and implement software correctly.
6. Demonstrate an understanding of how to use version control and continuous integration to sustainably integrate code from multiple parties.
## 1. Overview
UNSW’s revenue has been going down, despite the absolutely perfect MyExperience feedback.
Realising the bright potential of its students to recreate existing products they pay for, UNSW has tasked me and my army of COMP1531 students with recreating **Microsoft Teams**.
The 23T1 cohort of COMP1531 students will build the **backend Javascript server** for a new communication platform, **UNSW Memes** (or just **Memes** for short). We plan to task future COMP6080 students to build the frontend for Memes, something you won’t have to worry about.
**UNSW Memes** is the questionably-named communication tool that allows you to share, communicate, and collaborate virtually on a meme-like budget.
We have already specified a **common interface** for the frontend and backend to operate on. This allows both courses to go off and do their own development and testing under the assumption that both parties will comply with the common interface. This is the interface **you are required to use**.
The specific capabilities that need to be built for this project are described in the interface at the bottom. This is clearly a lot of features, but not all of them are to be implemented at once.
UNSW thanks you for doing your part in saving them approximately $100 per student, per year, despite making you pay for this course.
(For legal reasons, this is a joke).
## 2. Iteration 0: Getting Started
Now complete!
## 3. Iteration 1: Basic Functionality and Tests
[Watch the iteration 1 introductory video here.](https://youtu.be/_pLMyzA5sKA)
Please note that this video was recorded in 22T2. You should consult this spec for minor changes.
### 3.1. Task
In this iteration, you are expected to:
1. Write tests for and implement the basic functionality of Memes. The basic functionality is defined as the `auth*`, `channel*`, `channels*`, `users*`, `other*` capabilities/functions, as per the interface section below.
* Test files you add should all be in the form `*.test.js`.
* Do NOT attempt to write or start a web server. Don’t overthink how these functions are meant to connect to a frontend yet; that is for the next iteration. In this iteration you are just focusing on the basic backend functionality.
2. Write down any assumptions that you feel you are making in your interpretation of the specification.
* The `assumptions.md` file described above should be in the root of your repository. If you’ve not written markdown before (we assume most of you haven’t), it’s not necessary to research the format. Markdown is essentially plain text with a few extra features for basic formatting. You can just stick with plain text if you find that easier.
* We will only be marking the quality of SIX of your assumptions. You can indicate which ones you would like marked, otherwise we will look at the first six.
3. Follow best practices for git, project management, and effective teamwork, as discussed in lectures.
* The marking will be heavily biased toward how well you follow good practices and work together as a team. Just having a “working” solution at the end is not, on its own, sufficient to even get a passing mark.
* You need to use the **GitLab Issue Boards** for your task tracking and allocation. Spend some time getting to know how to use the taskboard. If you would like to use another collaborative task tracker e.g. Jira, Trello, Airtable, etc. you must first get approval from your tutor and grant them administrator access to your team board.
* You are expected to meet regularly with your group and document the meetings via meeting minutes, which should be stored at a timestamped location in your repo (e.g. uploading a word doc/pdf or writing in the GitLab repo Wiki after each meeting).
* You should have regular standups and be able to demonstrate evidence of this to your tutor.
* For this iteration, you will need to collectively make a minimum of **12** merge requests into `master`.
### 3.2. Implementing and testing features
You should first approach this project by considering its distinct “features”. Each feature should add some meaningful functionality to the project, but still be as small as possible. You should aim to size features as the smallest amount of functionality that adds value without making the project more unstable. For each feature you should:
1. Create a new branch.
2. Write tests for that feature and commit them to the branch. These will fail as you have not yet implemented the feature.
3. Implement that feature.
4. Make any changes to the tests such that they pass with the given implementation. You should not have to do a lot here. If you find that you are, you’re not spending enough time on your tests.
5. Consider any assumptions you made in the previous steps and add them to `assumptions.md`.
6. Create a merge request for the branch.
7. Get someone in your team who **did not** work on the feature to review the merge request.
8. Fix any issues identified in the review.
9. Merge the merge request into master.
For this project, a feature is typically sized somewhere between a single function, and a whole file of functions (e.g. `auth.js`). It is up to you and your team to decide what each feature is.
There is no requirement that each feature is implemented by only one person. In fact, we encourage you to work together closely on features, especially to help those who may still be coming to grips with Javascript.
Please pay careful attention to the following:
* We want to see **evidence that you wrote your tests before writing the implementation**. As noted above, the commits containing your initial tests should appear *before* your implementation for every feature branch. If we don’t see this evidence, we will assume you did not write your tests first and your mark will be reduced.
* Merging in merge requests with failing tests is **very bad practice**. Not only does this interfere with your team’s ability to work on different features at the same time, and thus slow down development, it is something you will be penalised for in marking.
* Similarly, merging in branches with untested features is also **very bad practice**. We will assume, and you should too, that any code without tests does not work.
* Pushing directly to `master` is not possible for this repo. The only way to get code into `master` is via a merge request. If you discover you have a bug in `master` that got through testing, create a bugfix branch and merge that in via a merge request.
* As is the case with any system or functionality, there will be some things that you can test extensively, some things that you can test sparsely/fleetingly, and some things that you can’t meaningfully test at all. You should aim to test as extensively as you can, and make judgements as to what things fall into what categories.
### 3.3. File structure and stub code
The tests you write should be as small and independent as possible. This makes it easier to identify why a particular test may be failing. Similarly, try to make it clear what each test is testing for. Meaningful test names and documentation help with this. An example of how to structure tests has been done in:
* `src/echo.js`
* `src/echo.test.js`
The echo functionality is tested, both for correct behaviour and for failing behaviour. As echo is relatively simple functionality, only 2 tests are required. For the larger features, you will need many tests to account for many different behaviours.
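For reference, the shape of these tests is roughly as follows. This is only a sketch: the exact error behaviour of `echo` is an assumption here, so check `src/echo.js` and `src/echo.test.js` for the real details.

```javascript
import { echo } from './echo.js';

describe('echo', () => {
  test('returns the same string it was given', () => {
    expect(echo('hello')).toBe('hello');
  });

  test('returns an error object for its error case', () => {
    // Assumes the provided echo treats the input 'echo' as an error.
    expect(echo('echo')).toStrictEqual({ error: 'error' });
  });
});
```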
The stub files from iteration 0 should now be developed into actual implementations, alongside the new `users.js` and `other.js` files:
* `auth.js`
* `channel.js`
* `channels.js`
* `users.js`
* `other.js`
The `userProfileV1` function should be included in `users.js`, and the `clearV1` function should be included in `other.js`.
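As a rough illustration only, `src/other.js` might end up shaped like the sketch below. The internal shape of the data store (`users` and `channels` arrays) and the setter name `setData` are assumptions, so use whatever `src/dataStore.js` actually provides (see section 3.6).

```javascript
// src/other.js (sketch only; adapt to your own data structure)
import { setData } from './dataStore.js';

function clearV1() {
  // Reset the application state back to an empty data store.
  setData({ users: [], channels: [] });
  return {};
}

export { clearV1 };
```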
### 3.4. Authorisation
Elements of securely storing passwords and other tricky authorisation methods are not required for iteration 1. You can simply store passwords plainly, and use the user ID to identify each user. We will discuss ways to improve the quality and methods of these capabilities in the later iterations.
Note that the `authUserId` variable is simply the user ID of the user who is making the function call. For example,
* A user registers an account with UNSW Memes and is assigned some integer ID, e.g. `42` as their user ID.
* When they make subsequent calls to functions, their user ID – in this case, `42` – is passed in as the `authUserId` argument.
Since `authUserId` refers to the user ID of the user calling the functions, you do NOT need to store separate user IDs (e.g. a uId or userId plus an authUserId) to identify each user in your data structure – you only need to store one user ID. How you name this user ID property in your data structure is up to you.
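For illustration, a function receiving `authUserId` might simply look both the caller and the target up by that single stored ID. The property names `uId` and `users`, and the getter name `getData`, are assumptions about your data structure, not requirements.

```javascript
import { getData } from './dataStore.js';

function userProfileV1(authUserId, uId) {
  const data = getData();
  // Both authUserId and uId are matched against the same stored ID property.
  const caller = data.users.find((user) => user.uId === authUserId);
  const target = data.users.find((user) => user.uId === uId);
  if (caller === undefined || target === undefined) {
    return { error: 'error' };
  }
  // In practice, only include the fields described in the interface
  // (user ID, email, first name, last name, handle), not e.g. the password.
  return { user: target };
}
```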
### 3.5. Test writing guidelines
To test basic functionality you will likely need code like:
```javascript
let result = authRegisterV1('valid@email.com', 'password123', 'Jake', 'Renzella');
// Expect to work since we registered with valid details
```
However, when deciding how to structure your tests, keep in mind the following:
* Your tests should be *black box* unit tests.
* Black box means they should not depend on your specific implementation, but rather work with *any* working implementation. You should design your tests such that if they were run against another group’s backend they would still pass.
* For iteration 1, you should *not* be importing the `data` object itself or directly accessing it via the `get` or `set` functions from `src/dataStore.js` inside your tests.
* Unit tests mean the tests focus on testing particular functions, rather than the system as a whole. Certain unit tests will depend on other tests succeeding. It’s OK to write tests that are only a valid test if other functions are correct (e.g. to test `channel` functions you can assume that `auth` is implemented correctly).
* Avoid writing your tests such that they need to be run in a particular order. That can make it hard to identify what exactly is failing.
* You should reset the state of the application (e.g. deleting all users, channels, messages, etc.) at the start of every test. That way you know none of them are accidentally dependent on an earlier test. You can use a function for this that is run at the beginning of each test (hint: `clearV1`).
* If you find yourself needing similar code at the start of a series of tests, consider using Jest’s **beforeEach** to avoid repetition.
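A minimal sketch of this pattern, assuming `clearV1` and `authRegisterV1` behave as described in the interface below (the email and password values are placeholders):

```javascript
import { clearV1 } from './other.js';
import { authRegisterV1 } from './auth.js';

beforeEach(() => {
  // Reset all users, channels, messages, etc. so tests stay independent.
  clearV1();
});

test('registering a valid user returns an authUserId', () => {
  const result = authRegisterV1('valid@email.com', 'password123', 'Jake', 'Renzella');
  expect(result).toStrictEqual({ authUserId: expect.any(Number) });
});
```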
### 3.6. Storing data
Nearly all of the functions will likely have to reference some “data source” to store information. For example, if you register two users, create two channels, and then add a user to a channel, all of that information needs to be “stored” somewhere. The most important thing for iteration 1 is not to overthink this problem.
Firstly, you should **not** use an SQL database, or something like firebase.
Secondly, you don’t need to make anything persist. What that means is that if you run all your tests, and then run them again later, it’s OK for the data to be “fresh” each time you run the tests. We will cover persistence in another iteration.
Inside `src/dataStore.js` we have provided you with an object called `data` which will contain the information that you will need to access across multiple functions. An explanation of how to `get` and `set` the data is in `dataStore.js`. You will need to determine the internal structure of the object. If you wish, you are allowed to modify this data structure.
For example, you could define a structure in a file that is empty, and as functions are called, the structure populates and fills up like the one below:
```javascript
let data = {
  'users': [
    { 'name': 'user1' },
    { 'name': 'user2' },
  ],
  'channels': [
    { 'name': 'channel1' },
    { 'name': 'channel2' },
  ],
};
```
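The exact names of the getter and setter live in `src/dataStore.js`. Assuming they are called `getData` and `setData`, a typical function reads the object, modifies it, and writes it back, along these lines (the `addUser` helper and the `users` array are purely illustrative):

```javascript
import { getData, setData } from './dataStore.js';

function addUser(name) {
  // Read the current state, modify it, then store it again.
  const data = getData();
  data.users.push({ name: name });
  setData(data);
}
```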
### 3.7. Dryrun
We have provided a very simple dryrun for iteration 1 consisting of 4 tests, one each for your implementation of `clearV1`, `authRegisterV1`, `channelsCreateV1`, and `channelsListV1`. These only check the format of your return types and simple expected behaviour, so do not rely on these as an indicator of the correctness of your implementation or tests.
To run the dryrun, you should be on a CSE machine (i.e. using `VLAB` or `ssh`’ed into CSE) and in the root directory of your project (e.g. `/project-backend`) and use the command:
`1531 dryrun 1`
Tips to ensure dryrun runs successfully:
* File paths used for imports end with `.js`, e.g. `import { clearV1 } from './other.js';`
* Files sit within the `/src` directory
### 3.8. Bad Assumptions
Here are a few examples of bad assumptions:
* Assume that all groups store their data in a field called `data` which is located in `dataStore.js`
* Assume all individual return values are returned as single values rather than being stored in an object
* Assume the functions are written correctly
* Assume the input `authUserId` is valid
Bad assumptions are usually ones that directly contradict an explicit or implicit requirement in the specification. Good assumptions are ones that fill holes or gaps in requirements.
Avoid “assumptions” that simply describe the implementation details irrelevant to the client, e.g. a particular method of ID generation. Instead, consider the scenarios in which the expected behaviour of Memes is not addressed clearly in the spec and document, with reasoning, your assumptions regarding such scenarios.
### 3.9. Working in parallel
This iteration provides challenges for many groups when it comes to working in parallel. Your group’s initial reaction will be that you need to complete registration before you can complete channel creation, and then channel creation must be done before you can invite users into channels, etc.
There are several approaches that you can consider to overcome these challenges:
* Have people working on down-stream tasks (like the channels implementation) work with stubbed versions of the up-stream tasks, e.g. the register function is stubbed to return a successful dummy response so that two people can start work in parallel (see the stub sketch after this list).
* Co-ordinate with your team to ensure prerequisite features are completed first (e.g. Giuliana completes `authRegister` on Monday meaning Hayden can start `channelsCreate` on Tuesday).
* You can pull any other remote branch into your own using the command `git pull origin <branch_name>`.
    * This can be helpful when two people are working on functions on separate branches where one function is a prerequisite of the other, and an implementation is required to keep the pipeline passing.
* You should pull from `master` on a regular basis to ensure your code remains up-to-date.
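For example, a temporary stub for the register function could look like the following (the dummy ID `1` is arbitrary), letting channel work proceed before the real implementation is merged:

```javascript
// Temporary stub so down-stream features can be developed in parallel.
// Replace it with the real implementation once that branch is merged into master.
function authRegisterV1(email, password, nameFirst, nameLast) {
  return { authUserId: 1 };
}

export { authRegisterV1 };
```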
### 3.10. Marking Criteria
For this and for all future milestones, you should consider the other expectations as outlined in section 6 below.
The formula used for automarking in this iteration is:
`Mark = t * i` (Mark equals `t` multiplied by `i`)
* `t` is the mark you receive for your tests running against your code (100% = your implementation passes all of your tests)
* `i` is the mark you receive for our course tests (hidden) running against your code (100% = your implementation passes all of our tests)
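For example, if your implementation passes 90% of your own tests (`t = 0.9`) and 80% of our course tests (`i = 0.8`), your automark would be `0.9 * 0.8 = 0.72`, i.e. 72%.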
### 3.11. Submission
This iteration’s due date and demonstration week are stated in section 5. You will demonstrate this submission in line with the information provided in section 5.
### 3.12. Peer Assessment
Please refer to section 6.5.
## 4. Interface specifications
These interface specifications come from Hayden & COMP6080, who are building the frontend to the requirements set out below.
### 4.1. Input/Output types
#### 4.1.1. Iteration 0+ Input/Output Types
#### 4.1.2. Iteration 1+ Input/Output Types
### 4.2. Interface
#### 4.2.2. Iteration 1 Interface
All return values should be an object, with keys identically matching the names in the table below, along with their respective values.
All error cases should return `{error: 'error'}`, where the error message in quotation marks can be anything you like (this will not be marked).
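For instance, using `channelsCreateV1` with hypothetical IDs (the values 42, -999 and 7 are made up for illustration):

```javascript
channelsCreateV1(42, 'memes', true);   // success: returns an object such as { channelId: 7 }
channelsCreateV1(-999, 'memes', true); // assuming -999 is not a registered user: { error: 'error' }
```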
| Name & Description | Parameters | Return type if no error | Returns `{error: 'error'}` when any of |
|---|---|---|---|
| **authLoginV1**<br>Given a registered user’s email and password, returns their `authUserId` value. | `( email, password )` | `{ authUserId }` | |
| **authRegisterV1**<br>Given a user’s first and last name, email address, and password, creates a new account for them and returns a new `authUserId`. A unique handle will be generated for each registered user. | `( email, password, nameFirst, nameLast )` | `{ authUserId }` | |
| **channelsCreateV1**<br>Creates a new channel with the given name, that is either a public or private channel. The user who created it automatically joins the channel. | `( authUserId, name, isPublic )` | `{ channelId }` | `authUserId` is invalid |
| **channelsListV1**<br>Provides an array of all channels (and their associated details) that the authorised user is part of. | `( authUserId )` | `{ channels }` | `authUserId` is invalid |
| **channelsListAllV1**<br>Provides an array of all channels, including private channels (and their associated details). | `( authUserId )` | `{ channels }` | `authUserId` is invalid |
| **channelDetailsV1**<br>Given a channel with ID `channelId` that the authorised user is a member of, provides basic details about the channel. | `( authUserId, channelId )` | `{ name, isPublic, ownerMembers, allMembers }` | `channelId` does not refer to a valid channel<br>`channelId` is valid and the authorised user is not a member of the channel<br>`authUserId` is invalid |
| **channelJoinV1**<br>Given a `channelId` of a channel that the authorised user can join, adds them to that channel. | `( authUserId, channelId )` | `{}` | `channelId` does not refer to a valid channel<br>`channelId` refers to a channel that is private, when the authorised user is not already a channel member and is not a global owner<br>`authUserId` is invalid |
| **channelInviteV1**<br>Invites a user with ID `uId` to join a channel with ID `channelId`. Once invited, the user is added to the channel immediately. In both public and private channels, all members are able to invite users. | `( authUserId, channelId, uId )` | `{}` | `channelId` does not refer to a valid channel<br>`uId` does not refer to a valid user<br>`uId` refers to a user who is already a member of the channel<br>`channelId` is valid and the authorised user is not a member of the channel<br>`authUserId` is invalid |
| **channelMessagesV1**<br>Given a channel with ID `channelId` that the authorised user is a member of, returns up to 50 messages between index "start" and "start + 50". The message with index 0 (i.e. the first element in the returned array of `messages`) is the most recent message in the channel. This function returns a new index, "end". If there are more messages to return after this function call, "end" equals "start + 50". If this function has returned the least recent messages in the channel, "end" equals -1 to indicate that there are no more messages to load after this return. | `( authUserId, channelId, start )` | `{ messages, start, end }` | `channelId` does not refer to a valid channel<br>`start` is greater than the total number of messages in the channel<br>`channelId` is valid and the authorised user is not a member of the channel<br>`authUserId` is invalid |
| **userProfileV1**<br>For a valid user, returns information about their user ID, email, first name, last name, and handle. | `( authUserId, uId )` | `{ user }` | `uId` does not refer to a valid user<br>`authUserId` is invalid |