Python Tip #9 – sorting

Python makes sorting simple with the built-in sorted(), and the key argument lets you sort by arbitrarily complex rules.

>>> strings = ['alice', 'bob', 'donald', 'cathy']
>>> sorted(strings)
['alice', 'bob', 'cathy', 'donald']

>>> sorted(strings, key=len)
['bob', 'alice', 'cathy', 'donald']

>>> def secondchar(word):
...    return word[1]

>>> sorted(strings, key=secondchar)
['cathy', 'alice', 'bob', 'donald']

Python Tip #8 – reducing looping by using dicts

In situations where you have a list of objects and have to retrieve them in random order, dictionaries can act as lookup tables.

users = [...]  # a list of User objects

# Get users one by one by looking up ids
user_1 = next((u for u in users if u.id == user_1_id), None)
user_2 = next((u for u in users if u.id == user_2_id), None)

# Simpler solution using a lookup table
lookup = {u.id: u for u in users}
user_1 = lookup[user_1_id]
user_2 = lookup[user_2_id]

This tip is not very obvious, hence this explanation:

user_1 = next((u for u in users if u.id == user_1_id), None)

This method runs an iterator through the list of users every time we have to find a user. If we need to look up N users from a list of N this way, we run that loop N times, for a complexity of O(N²).

lookup = {u.id: u for u in users}
user_1 = lookup[user_1_id]

This method, on the other hand, iterates through the users list only once to build a lookup table, which we can then reuse again and again without scanning the list each time. Each subsequent lookup is O(1), so N lookups cost O(N) instead of O(N²), which makes a dramatic difference as the list grows.
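As a runnable sketch of the idea (the `User` class and sample data here are illustrative stand-ins, not from the original post):

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str

users = [User(1, "alice"), User(2, "bob"), User(3, "cathy")]

# O(N) per lookup: scans the whole list every time
user_1 = next((u for u in users if u.id == 1), None)

# O(N) once to build the table, then O(1) per lookup
lookup = {u.id: u for u in users}
user_2 = lookup[2]
```

After the table is built, `lookup[2]` hands back the same `User` object the list scan would have found, without touching the list again.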

Python Tip #7 – getattr()

Sometimes we have to deal with external objects whose attributes may or may not be present. getattr() can save you in those situations.

# Get the attribute name
name = obj.name  # raises AttributeError if name is not present

# Check if the attribute is present before fetching
try:
    name = obj.name
except AttributeError:
    name = "Guest"

# Simpler solution
name = obj.name if hasattr(obj, "name") else "Guest"

# Simplest Solution
name = getattr(obj, "name", "Guest")
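A small self-contained demonstration (the `Profile` class here is just an illustrative stand-in for whatever external object you receive):

```python
class Profile:
    name = "Alice"

obj = Profile()
plain = object()  # has no "name" attribute

# Attribute present: getattr returns its value
present = getattr(obj, "name", "Guest")

# Attribute missing: the default is returned instead of raising AttributeError
missing = getattr(plain, "name", "Guest")
```

If you omit the third argument, `getattr(plain, "name")` raises AttributeError just like a plain attribute access would.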

Python Tip #6 – Merging Dictionaries

Merge or combine dictionaries

d1 = { "a": 1 }
d2 = { "b": 2 }

# Adding elements of one dictionary to another
d1.update(d2)  # d1 => { "a": 1, "b": 2 }

# Create a new dict with values from other dictionaries
d3 = { **d1, **d2 }  # d3 => { "a": 1, "b": 2 }
d4 = { **d3, "c": 3 }  # d4 => { "a": 1, "b": 2, "c": 3 }

`**` is the dictionary unpacking operator.
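If you are on Python 3.9 or later, the `|` operator (from PEP 584) does the same merge even more directly; this addition goes beyond the original tip:

```python
d1 = {"a": 1}
d2 = {"b": 2, "a": 10}

d3 = d1 | d2   # the right-hand operand wins on duplicate keys
d1 |= d2       # in-place merge, equivalent to d1.update(d2)
```

Like `{**d1, **d2}`, the `|` form creates a new dictionary and leaves its operands untouched, while `|=` mutates the left-hand dict in place.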

Python Tip #5 – Get value from dict if key is present

Check for existence of a key in dictionary and retrieve its value if present.

dictionary = { "key": "value" }

# Checking for the presence of the key and getting its value
wanted = None
if "key" in dictionary:
    wanted = dictionary["key"]

# Simpler version
wanted = dictionary.get("key", None)

Python Tip #4 – Find an element in a list satisfying condition

What if you want to find the first item that matches the condition instead of getting a list of items?

selected = None
for i in items:
    if condition:
        selected = i
        break

# Simpler version using next()
selected = next((i for i in items if condition), None)

next() is a built-in function that is not as well known as it deserves to be.
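A concrete version of the pattern, with an illustrative list and condition filled in:

```python
items = [3, 7, 10, 15, 22]

# First item matching the condition (here: the first even number)
selected = next((i for i in items if i % 2 == 0), None)

# No item matches: the default is returned instead of raising StopIteration
missing = next((i for i in items if i > 100), None)
```

Because the argument is a generator expression, next() stops scanning at the first match instead of building a full filtered list.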

Sensible Test Data

I am currently working on a project called Peer Feedback, where we are trying to build a nice peer feedback system for college students. We use the Canvas Learning Management System (CanvasLMS) API as the data source for our application. All the data about students, courses, assignments, and submissions is fetched from CanvasLMS. The application is written in Python Flask.

Current Setup

We are mostly getting data from the API, relaying it to the frontend, or storing it in the DB. So most of our testing is just mocking network calls and asserting response codes. Only a few functions contain original logic, so our test suite focuses on those functions and endpoints for the most part.

We recently ran into a situation where we needed to test something that involved fetching and filtering data from the API and retrieving data from the DB based on the result.

Faker Library and Its Issues

The problem we ran into is that we can’t test the function without first initializing the database. The code we had for initializing CanvasLMS used the Faker library, which provides nice fake data to create a real-world feel for us. But it came with its own set of problems:

Painful Manual Testing

While we had the feel of testing real-world information, it came with real-world problems. For example, I cannot log in as a user without first looking up the username in the output generated during initialization. So I had to maintain a post-it on my desktop, use search functionality to find the user I wanted to test, copy their email, and log in with it.


Inconsistency across test cycles

When we write our tests, there is no assurance that we can reference a particular user in the test by id and expect parameters like email or username to match. With the test data being generated freshly each time, any referencing or association of values held true only for that cycle. For example, a function called get_user_by_email couldn’t be tested because we didn’t know what to expect in the resulting user object.

Complex Test Suite

To compensate for the inconsistency in the data across cycles, we increased the complexity of the test suite: we saved test data in JSON files and used it for validation. It became a multi-step process and almost an application of its own. For example, the get_user_by_email test would first initialize the DB, then read a JSON file containing the test data, get a user with an email and validate the function, then find a user without an email and validate that it throws the right error, then find a user with a malformed email… you get the idea. The test function itself now had enough logic to warrant a test suite of its own.

Real-World Problems

With the real-world-like data came real-world problems. The emails generated by Faker are not really fake: there is a high chance that a number of them belong to real people. So guess what happened when we decided to test our email program 🙂

Sensible Test Data

We are finally switching to more sensible test data. We are dropping Faker for user generation and shifting to a sequential user generation system, with usernames like user001 and matching emails. This solves the issues mentioned above:

  1. Now I can log in without having to first look the username up in a table. All I need to do is append an integer to the word user.
  2. I can be sure that user001 will always have the matching email, and that these associations will be consistent across test cycles.
  3. I no longer have to read a JSON file to get a user object and its related test information. I can simply pick one using the userXXX template, reducing the complexity of the test suite.
  4. And we won’t be getting emails from random people asking us to remove them from mailing lists, and we are probably saving ourselves from being blacklisted as a spam domain.


Faker provided us with data that helped us test a number of things in the frontend, like different name lengths, multi-part names, and unique names for testing filtering and searching, while also adding a set of problems that made our work difficult and slow.

Our solution for a sensible test dataset is a plain, numerically sequenced one.


Using generic name tags like user was still causing friction, as we have multiple roles: teachers, TAs, students, etc. So I improved it further by creating users like student0000, ta000, and teacher00.
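The role-based scheme can be sketched in a few lines; the helper name, field names, and the example.com domain below are illustrative assumptions, not the project’s actual code:

```python
def make_users(role, count, width=3):
    """Generate predictable, sequentially numbered test users for a role."""
    return [
        {
            "username": f"{role}{i:0{width}}",
            "email": f"{role}{i:0{width}}@example.com",
        }
        for i in range(count)
    ]

students = make_users("student", 3, width=4)  # student0000, student0001, ...
tas = make_users("ta", 2)                     # ta000, ta001
```

Because the numbering is deterministic, any test can reconstruct a user’s username and email from the role and index alone, with no lookup table or JSON fixture.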

Python Tip #1 – Setting flags without using if statements

When you have to check for the presence of a value in a list and set a flag based on it, you can avoid the typical

set default => check => update 

routine in Python and condense it to a single line like this.

orders = ['pizza', 'coke', 'fries']
order_book = {}

# Setting a yes-or-no flag in another dictionary or object
order_book['pizza'] = False
if 'pizza' in orders:
    order_book['pizza'] = True

# Simpler Version
order_book['pizza'] = 'pizza' in orders