Lab 2.2: Safety Net Construction
Module: 2.2 - Test Generation | Duration: 1 hour | Sample Project: node-express-mongoose-demo
Learning Objectives
By the end of this lab, you will be able to:
- Generate unit tests for untested functions
- Create edge case tests (null, empty, boundary conditions)
- Run tests and verify they pass
- Build a safety net before refactoring
Prerequisites
- Completed Lab 2.1 (Plan Mode)
- Understanding of the testing framework in the project
Why Tests Matter
The Golden Rule
"You can't safely refactor code that doesn't have tests."
Tests are your safety harness:
- They prove the code works before you change it
- They catch regressions during refactoring
- They document expected behavior
Setup
```bash
# Navigate to the sample project
cd sample-projects/node-express-mongoose-demo

# Install dependencies (if not done)
npm install

# Verify tests run
npm test

# Start Claude Code
claude
```
Task 1: Find Untested Code
Time: 10 minutes
Identify functions that need test coverage.
Prompts to try:
- "What functions in this project have no tests?"
- "Show me functions in app/controllers that aren't covered by tests."
- "Which critical functions lack test coverage?"

Pick a target: Choose a function that:
- Is used in multiple places (high impact)
- Has no existing tests
- Has clear inputs and outputs
Success criteria:
- [ ] Identified 2-3 untested functions
- [ ] Selected one critical function to test
Task 2: Generate Basic Tests
Time: 15 minutes
Create a test file for your chosen function.
Prompts to try:
- "Generate a test file for the create function in app/controllers/articles.js. Use the project's existing test framework."
- "Write unit tests for the User model's authenticate method." (a sketch of what this might produce appears at the end of this task)

Review the output:
- Does it test the happy path?
- Does it use the correct testing framework?
- Are assertions meaningful?
Success criteria:
- [ ] Test file generated
- [ ] Tests run without errors
- [ ] Happy path is covered
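If you used the second prompt, the output might look roughly like the sketch below. This is a minimal sketch, assuming a Jest-style runner like the examples later in this lab and a Mongoose User model whose password setter hashes the value and whose authenticate(plainText) method compares a candidate password against it; the model path and field names are assumptions, so check the real model before copying anything.

```javascript
// Minimal sketch: the model path, field names, and authenticate() behavior are
// assumptions about this project; adjust them to match the actual User model.
const mongoose = require('mongoose');
require('../../app/models/user'); // registers the schema (path assumed)
const User = mongoose.model('User');

describe('User#authenticate', () => {
  it('returns true when the password matches', () => {
    const user = new User({ name: 'Test', email: 'test@example.com', password: 'secret123' });
    expect(user.authenticate('secret123')).toBe(true);
  });

  it('returns false when the password does not match', () => {
    const user = new User({ name: 'Test', email: 'test@example.com', password: 'secret123' });
    expect(user.authenticate('wrong-password')).toBe(false);
  });
});
```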
Task 3: Add Edge Case Tests
Time: 20 minutes
Expand coverage with edge cases.
Prompts to try:
- "Add edge case tests for:"
  - null or undefined inputs
  - empty strings or arrays
  - boundary conditions
  - invalid data types
- "What edge cases should we test for the validateUser function?"
- "Add tests for error scenarios in the article controller."

Common edge cases to test:
| Category | Examples |
|---|---|
| Null/undefined | null, undefined, missing params |
| Empty | "", [], {} |
| Boundaries | 0, -1, MAX_INT, very long strings |
| Types | Wrong type (string instead of number) |
| State | Not logged in, expired session |
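To make the first few rows concrete, here is a hedged sketch in the same Jest style as the rest of this lab. The validateUser function and its path are placeholders taken from the prompt above, and the exact error behavior is an assumption; substitute the real function you chose to test.

```javascript
// Sketch only: validateUser, its location, and its throwing behavior are placeholders.
const { validateUser } = require('../../app/utils/validate');

describe('validateUser edge cases', () => {
  // Null/undefined and empty inputs should be rejected cleanly, not crash.
  it.each([null, undefined, {}, ''])('rejects %p', (input) => {
    expect(() => validateUser(input)).toThrow();
  });

  it('rejects a name far beyond any reasonable boundary length', () => {
    const longName = 'a'.repeat(10000);
    expect(() => validateUser({ name: longName, email: 'a@b.co' })).toThrow();
  });

  it('rejects values of the wrong type', () => {
    expect(() => validateUser({ name: 42, email: true })).toThrow();
  });
});
```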
Success criteria:
- [ ] At least 3 edge case tests added
- [ ] Tests handle error scenarios
- [ ] All tests pass
Task 4: Run and Verify
Time: 15 minutes
Make sure all tests pass and coverage improved.
```bash
# Run the test suite
npm test

# If coverage report is available
npm run test:coverage
```
Verification checklist:
- [ ] All new tests pass
- [ ] Existing tests still pass
- [ ] No false positives (tests that always pass)
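A frequent source of false positives in async code is a promise that is never awaited. The sketch below is self-contained around a hypothetical createArticle helper and assumes a Jest-style runner; it shows the always-passing pattern and the fix.

```javascript
// Hypothetical helper, defined inline so the sketch is self-contained.
async function createArticle(data) {
  if (!data) throw new Error('data is required');
  return { id: 1, ...data };
}

// False positive: the test body returns before the promise settles. If
// createArticle ever stops throwing, .catch() simply never runs and the
// test keeps passing.
it('looks like it covers the error path, but always passes', () => {
  createArticle(null).catch((err) => {
    expect(err.message).toBe('data is required');
  });
});

// Fixed: awaiting the expectation makes a missing rejection fail the test.
it('actually covers the error path', async () => {
  await expect(createArticle(null)).rejects.toThrow('data is required');
});
```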
Debugging failed tests:
- "This test is failing. Can you explain why and fix it?"

Success criteria:
- [ ] All tests pass
- [ ] Understand what each test verifies
Test Quality Checklist
Good tests should be:
- [ ] Isolated - Each test can run independently (see the sketch after this checklist)
- [ ] Deterministic - Same result every time
- [ ] Readable - Clear what's being tested
- [ ] Fast - Run in milliseconds, not seconds
- [ ] Meaningful - Test behavior, not implementation
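For the isolation and determinism items, a common pattern is to rebuild fixtures in beforeEach so no test depends on another's leftovers. The formatTitle helper below is hypothetical and inlined only to keep the sketch runnable.

```javascript
// Hypothetical helper, inlined so the isolation pattern is the only thing on show.
function formatTitle(article) {
  return article.title.trim();
}

describe('formatTitle', () => {
  let article;

  beforeEach(() => {
    // A fresh fixture per test: mutations in one test cannot leak into the next,
    // so the suite gives the same result in any order.
    article = { title: '  My Post  ' };
  });

  it('trims surrounding whitespace', () => {
    expect(formatTitle(article)).toBe('My Post');
  });

  it('does not rely on state left behind by another test', () => {
    article.title = 'Another Post  ';
    expect(formatTitle(article)).toBe('Another Post');
  });
});
```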
Example Test Structure
```javascript
describe('ArticleController', () => {
  describe('create', () => {
    it('should create article with valid data', async () => {
      // Arrange
      const articleData = { title: 'Test', body: 'Content' };

      // Act
      const result = await controller.create(articleData);

      // Assert
      expect(result).toHaveProperty('id');
      expect(result.title).toBe('Test');
    });

    it('should throw error with missing title', async () => {
      // Arrange
      const articleData = { body: 'Content' };

      // Act & Assert
      await expect(controller.create(articleData))
        .rejects.toThrow('Title is required');
    });

    it('should handle null input', async () => {
      await expect(controller.create(null))
        .rejects.toThrow();
    });
  });
});
```
Tips for Success
- Match the project's style - Look at existing tests for patterns
- Test behavior, not implementation - Focus on what, not how (a contrast sketch follows this list)
- One assertion per test - Easier to debug when things fail
- Use descriptive names - "should return error when email is invalid"
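To make the behavior-versus-implementation tip concrete, here is a hedged contrast built around a tiny in-memory controller and repository; both are hypothetical stand-ins, not the project's real modules.

```javascript
// Hypothetical in-memory pieces, inlined so the contrast is self-contained.
const repo = {
  insert: async (data) => ({ id: 1, ...data }),
};
const controller = {
  create: async (data) => repo.insert(data),
};

// Implementation-coupled: pins the controller to calling repo.insert(), so a
// refactor to repo.save() breaks the test even though callers see no change.
it('calls repo.insert exactly once', async () => {
  const insertSpy = jest.spyOn(repo, 'insert');
  await controller.create({ title: 'Test' });
  expect(insertSpy).toHaveBeenCalledTimes(1);
  insertSpy.mockRestore();
});

// Behavior-focused: asserts only on what callers can observe in the result.
it('returns the persisted article with an id and the given title', async () => {
  const result = await controller.create({ title: 'Test' });
  expect(result).toHaveProperty('id');
  expect(result.title).toBe('Test');
});
```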
Troubleshooting
Tests don't run
- Check the testing framework is installed
- Verify the test file naming convention
- Check the test script in package.json
Tests fail unexpectedly
- Read the error message carefully
- Check if mocking is required
- Verify test database setup
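If failures trace back to the database, one common approach (an assumption here, not necessarily how this project is configured) is to point Mongoose at a throwaway in-memory MongoDB from a shared setup file, for example with the mongodb-memory-server package:

```javascript
// Sketch of a shared test setup file, assuming Jest globals and the
// mongodb-memory-server package; neither is guaranteed to be part of this
// project, so check its real test configuration first.
const { MongoMemoryServer } = require('mongodb-memory-server');
const mongoose = require('mongoose');

let mongod;

beforeAll(async () => {
  // An in-memory MongoDB keeps tests fast and off any real database.
  mongod = await MongoMemoryServer.create();
  await mongoose.connect(mongod.getUri());
});

afterAll(async () => {
  await mongoose.disconnect();
  await mongod.stop();
});
```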
Stretch Goals
If you finish early:
- Generate integration tests for an API endpoint
- Add tests for async error handling
- Create a test for a complex multi-step flow
Deliverables
At the end of this lab, you should have:
- New test file(s) committed to the project
- At least 5 tests (happy path + edge cases)
- All tests passing
- Confidence to refactor the tested code
Next Steps
After completing this lab, move on to Lab 2.3: Red-Green-Refactor-AI to practice TDD with Claude.