Flask Marshmallow – has no attribute data

TL;DR

Remove .data after the schema.dump() calls in the code. Marshmallow 3 supplies the data directly.

The Issue

If you use Flask-Marshmallow in your Flask application for serialisation of models, chances are you will run into this error when you upgrade dependencies. The reason is that Flask-Marshmallow added support for Marshmallow 3 when moving from version 0.10.0 to 0.10.1, and Marshmallow 3 returns the data directly when the dump function is called, instead of wrapping it in an object with a .data attribute.

The solution

This is a breaking change and requires the codebase to be updated. Remove all .data accessors from the dump() outputs. For example:

users = user_schema.dump(query_result, many=True).data

# Will become

users = user_schema.dump(query_result, many=True)
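If the codebase has to work with both Marshmallow 2 and 3 while the migration is in progress, a small compatibility helper can bridge the gap. A minimal sketch (the helper name is mine):

def dump_data(schema, obj, **kwargs):
    """Return plain serialised data on both Marshmallow 2 and 3."""
    result = schema.dump(obj, **kwargs)
    # Marshmallow 2 returns a MarshalResult with a .data attribute,
    # Marshmallow 3 returns the serialised dict/list directly.
    return result.data if hasattr(result, "data") else result

users = dump_data(user_schema, query_result, many=True)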

Some thoughts

I don’t know why such a big change in support was released as a bug fix version, from 0.10.0 to 0.10.1. It should at least have been released as 0.11 in my opinion. If I could go further, I would say wrappers for libraries, or software in general, should always follow the parent’s version number down to the minor version. If Marshmallow is at 3.2.x then it makes sense for Flask-Marshmallow to be at 3.2.x. That gives a better idea of what we are using and what changes we need to account for.

Parsing & Validating JSON in Flask Requests

This is a follow-up to the previous article: Simplifying JSON parsing in Flask routes using decorators

In the previous article we focused on simplifying JSON parsing using decorators. The aim was to avoid repeating the same logic in every route, in keeping with the DRY principle. In this article I will focus on what goes on inside the decorator.

I ended the last article with the following decorator (see the previous article for the implementation of @required_params):

@route(...)
@required_params({"name": str, "age": int, "married": bool})
def ...

where we pass the incoming parameters and their data types and perform their type validation.

Using an external library for validation

The decorator implemented above is suitable for simple use cases. Now consider the following more advanced use cases:

  • What if we need more complex validations like Email or Date validation?
  • What if we need to restrict a field to certain values? Say role should be restricted to (teacher, student, admin)?
  • What if we need to have custom error messages for each field?
  • What if the value of a parameter is an object with its own set of validation rules?

The solution to any one of these is not going to be trivial enough to implement in a single, simple decorator function. This is where external libraries come to our rescue. Libraries like jsonschema, schematics and marshmallow not only provide the functionality, they also bring more clarity to the codebase, add modularity and improve readability.

Note: If you are already using a serialisation library like marshmallow in your project to handle your database models, you probably already know all this. If you don’t use a serialisation library and instead have something like a to_json() or to_dict() function in your models, then you SHOULD consider removing those functions and using a serialisation library.

Example application

Let me lay out an example use case which we will use to explore this. Here we have a simple app with a route that accepts a JSON payload. The expected JSON data is described in the @required_params(...) decorator and the validation is carried out inside the decorator function.

from flask import Flask, request, jsonify
from functools import wraps

app = Flask(__name__)
users = []


def required_params(required):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            _json = request.get_json()
            missing = [r for r in required.keys()
                       if r not in _json]
            if missing:
                response = {
                    "status": "error",
                    "message": "Request JSON is missing some required params",
                    "missing": missing
                }
                return jsonify(response), 400
            wrong_types = [r for r in required.keys()
                           if not isinstance(_json[r], required[r])]
            if wrong_types:
                response = {
                    "status": "error",
                    "message": "Data types in the request JSON doesn't match the required format",
                    "param_types": {k: str(v) for k, v in required.items()}
                }
                return jsonify(response), 400
            return fn(*args, **kwargs)
        return wrapper
    return decorator


@app.route('/')
def hello_world():
    return 'Hello World!'


@app.route("/user/", methods=["POST"])
@required_params({"first_name": str, "last_name": str, "age": int, "married": bool})
def add_user():
    # here a simple list is used in place of a DB
    users.append(request.get_json())
    return "OK", 201


if __name__ == '__main__':
    app.run()

Requests and responses

Now let us send an actual request to this application.
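For instance, using the requests library (assuming the app is running on Flask’s default development address):

import requests

payload = {"first_name": "John", "last_name": "Doe", "age": 30, "married": True}
response = requests.post("http://127.0.0.1:5000/user/", json=payload)
print(response.status_code)  # 201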

[Screenshot: decorator_correct_request]

Now that is really nice, we have got a “201 Created” response. Let us now try a wrong input. I am setting the married field to "twice" instead of true or false, the expected boolean values.

[Screenshot: decorator_wrong_request]

That returns a “400 Bad Request” as expected. The decorator has validated the types of the input and found that one of the values is not in the expected format. But the error message itself is kind of crude:

  1. It doesn’t actually tell us which parameter is wrong. With a complex object this might waste a lot of time jumping between the docs and the input, guessing which parameter is at fault.
  2. The data types are represented as Python class types like <class 'int'>. While this might convey the intended meaning, it would be far friendlier to say something like integer instead.
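We could keep patching the hand-rolled decorator. For example, the wrong_types branch could report the offending fields with friendlier type names. A rough sketch (the FRIENDLY_NAMES mapping is mine, not part of the original code):

FRIENDLY_NAMES = {str: "string", int: "integer", bool: "boolean", float: "number"}

wrong_types = {r: FRIENDLY_NAMES.get(required[r], str(required[r]))
               for r in required
               if not isinstance(_json[r], required[r])}
if wrong_types:
    response = {
        "status": "error",
        "message": "Some params have the wrong data type",
        "expected_types": wrong_types
    }
    return jsonify(response), 400

But every new requirement adds more hand-rolled code like this, which is exactly where a library earns its keep.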

Using Marshmallow for validation

Using marshmallow, we can define the schema of our expected JSON:

from marshmallow import Schema, fields, ValidationError

class UserSchema(Schema):
    first_name = fields.String(required=True)
    last_name = fields.String(required=True)
    age = fields.Integer(required=True)
    married = fields.Boolean(required=True)

Now that marshmallow will take care of the validation, we can update our decorator too:

def required_params(schema):
    def decorator(fn):

        @wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                schema.load(request.get_json())
            except ValidationError as err:
                error = {
                    "status": "error",
                    "messages": err.messages
                }
                return jsonify(error), 400
            return fn(*args, **kwargs)

        return wrapper
    return decorator

And finally, we pass an instance of UserSchema to the decorator instead of the dictionary of params:

@app.route("/user/", methods=["POST"])
@required_params(UserSchema(strict=True))
def add_user():
    # here a simple list is used in place of a DB
    users.append(request.get_json())
    return "OK", 201

Note: I have passed strict=True so that marshmallow raises a ValidationError; by default, marshmallow 2 only collects errors instead of raising. In marshmallow 3, load() always raises ValidationError and the strict option has been removed, so check your version to see whether the parameter is needed at all.
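In other words, depending on the installed version, the decorator line would look like one of these (a sketch; marshmallow 3 no longer accepts the strict argument):

# marshmallow 2.x: strict=True makes load() raise ValidationError
@required_params(UserSchema(strict=True))

# marshmallow 3.x: load() always raises, and strict has been removed
@required_params(UserSchema())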

With the app updated, now let us send a request and test it.

[Screenshot: marshmallow_error_1]

Good. We get the “not a boolean” validation error. Now what if we have multiple errors?

[Screenshot: marshmallow_error_2]

Sweet, we get parameter-specific error messages even when there are multiple errors. If you remember, our original implementation could only report one kind of error at a time because the first failing check returned a response. Using the library is a good upgrade from that.

By defining schemas for the various data models that we expect as input, we can perform complex validations independently of our view functions. This gives us clean view functions that handle just the business logic.
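For example, one of the advanced cases listed earlier, a parameter whose value is an object with its own validation rules, can be handled with a nested schema. A sketch (the AddressSchema fields are made up for illustration):

from marshmallow import Schema, fields

class AddressSchema(Schema):
    city = fields.String(required=True)
    zip_code = fields.String(required=True)

class UserSchema(Schema):
    first_name = fields.String(required=True)
    last_name = fields.String(required=True)
    age = fields.Integer(required=True)
    married = fields.Boolean(required=True)
    # The nested object is validated against its own schema
    address = fields.Nested(AddressSchema, required=True)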

Conclusion

This post only covers the basic implementation of using such libraries to simplify parsing and validation of incoming JSON. The marshmallow library offers a lot more: complex validators like Email and Date, parsing a subset of a schema using only, custom validators (age between 30 and 35, for example), loading the data directly into SQLAlchemy models (check out Flask-marshmallow) and so on. These can really make app development easier, safer and faster.
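A rough sketch of a few of those features (the field names and bounds here are purely illustrative):

from marshmallow import Schema, fields, validate

class TeacherSchema(Schema):
    email = fields.Email(required=True)
    joined = fields.Date(required=True)
    # Restrict a field to a fixed set of values
    role = fields.String(required=True,
                         validate=validate.OneOf(["teacher", "student", "admin"]))
    # Custom rule: age must fall within a range
    age = fields.Integer(required=True,
                         validate=validate.Range(min=30, max=35))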

Schematics is another library offering similar functionality, with a focus on ORM-like model definitions.

Simplifying JSON parsing in Flask routes using decorators

Flask is simple and effective when it comes to reading input parameters from the URL. For example, take a look at this simple route.

@app.route("/todo/<int:id>/")
def task(id):
    return jsonify({"id": id, "task": "Write code"})

You specify a parameter called id and set its type as int; Flask automatically parses the value from the URL, converts it to an integer and makes it available as a parameter to the task function.

But things get harder when we build APIs and JSON is passed in as input.

@app.route("/todo/", methods=["POST"])
def create_task():
    incoming = request.get_json()
    if "task" not in incoming:
        return jsonify({"status": "error", "message": "Missing parameter 'task'"}), 400

    tasks.append(incoming["task"])
    return "Task added successfully", 201

The above method requires the JSON input to contain a task parameter in order to create a new task, so it has to check whether that parameter was sent with the request before it can add the task to the task list. This is simple to implement for just a few parameters, but real-world APIs aren’t always this simple. For example, if you envision an address book API, you probably have multiple fields like first name, last name, address line 1, address line 2, city, state, zip code and so on, and writing something like

if "first_name" not in incoming:
    ...
if "last_name" not in incoming:
    ...

is going to be tedious. We can perhaps take a more pythonic approach and write the logic as:

@app.route("/address/", methods=["POST"])
def add_address():
    required_params = [
        "first_name", "last_name", "addr_1", 
        "addr_2", "city", "state", "zip_code"
    ]
    incoming = request.get_json()
    missing = [rp for rp in required_params if rp not in incoming]
    if missing:
        return jsonify({
            "status": "error",
            "message": "Missing required parameters",
            "missing": missing
        }), 400

    # Add the address to your address book
    addresses.append(incoming)
    return "Address added successfully", 201

As you write more routes, you will notice the missing check and the if missing block repeating themselves in every place where we expect JSON data. Instead of repeating the logic over and over, we can move it into a decorator like this:

def required_params(*args):
    """Decorator factory to check request data for POST requests and return
    an error if required parameters are missing."""
    required = list(args)

    def decorator(fn):
        """Decorator that checks for the required parameters"""

        @wraps(fn)
        def wrapper(*args, **kwargs):
            missing = [r for r in required if r not in request.get_json()]
            if missing:
                response = {
                    "status": "error",
                    "message": "Request JSON is missing some required params",
                    "missing": missing
                }
                return jsonify(response), 400
            return fn(*args, **kwargs)
        return wrapper
    return decorator

Now we can write the same add_address route like this:

@app.route("/address/", methods=["POST"])
@required_params("first_name", "last_name", "addr_1","addr_2", "city", "state", "zip_code")
def add_address():
    addresses.append(request.get_json())
    return "Address added successfully", 201

Here is how it has changed

[Screenshot: json_decorator_diff]

The required_params decorator will do the job of checking for the presence of parameters and returning an error. We can add the decorator to any route that requires JSON parameter validation.

If we put in some more work, we can even expand the logic to specify the datatypes of those parameters by passing a dictionary like this:

@route(...)
@required_params({"name": str, "age": int, "married": bool})
def ...

and perform the validations in the decorator:

def required_params(required):
    def decorator(fn):
        """Decorator that checks for the required parameters"""

        @wraps(fn)
        def wrapper(*args, **kwargs):
            _json = request.get_json()
            missing = [r for r in required.keys()
                       if r not in _json]
            if missing:
                response = {
                    "status": "error",
                    "message": "Request JSON is missing some required params",
                    "missing": missing
                }
                return jsonify(response), 400
            wrong_types = [r for r in required.keys()
                           if not isinstance(_json[r], required[r])]
            if wrong_types:
                response = {
                    "status": "error",
                    "message": "Data types in the request JSON doesn't match the required format",
                    "param_types": {k: str(v) for k, v in required.items()}
                }
                return jsonify(response), 400
            return fn(*args, **kwargs)
        return wrapper
    return decorator

With this, if a JSON field is sent with the wrong datatype, an appropriate response will be returned as well.
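A quick way to exercise this without leaving Python is Flask’s built-in test client. A sketch, assuming a /user/ route decorated with @required_params({"name": str, "age": int, "married": bool}) and app being the Flask application object:

client = app.test_client()

# "age" is sent as a string instead of an int
resp = client.post("/user/", json={"name": "John", "age": "thirty", "married": True})
print(resp.status_code)                # 400
print(resp.get_json()["param_types"])  # shows the expected type for each param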

PS: I found this full blown decorator function with custom error messages and validations after I wrote this post. Check it out if you want even more functionality.

Adding Unique Constraints After the Fact in SQLAlchemy [Copy]

This post originally appeared at https://skien.cc/blog/2014/01/31/adding-unique-contraints-after-the-fact-in-sqlalchemy/. But that URL now throws a 404 and I could access the page only through the Google cache, so I am copying it here in case it goes missing in the future.

Update:

Replacing image in a PDF with Python

Being a freelancer is an interesting role. You come across a variety of projects. I recently worked on a project that involved replacing images in a PDF, which taught me a few things.

  1. While there are a number of tools for dealing with PDFs in Python, general purpose tools can only do so much because of… reason 2.
  2. A PDF is a dump of instructions to put things in specific places. There is no logical structure to how it is done, which makes it hard for general purpose tools to manipulate a PDF in a consistent way.
  3. Not everything is bad. Almost all additive changes like adding text or images, and whole-page changes like rotating or cropping, are usually possible, and so are read operations like text and image extraction.
  4. The issue is when you want to delete something and replace it with something else.

With that learnt, I set out to achieve the goal anyway.

Step 1 – Understanding the format

Humans invented the PDF format, which means they used words to describe things in the file, which means we can read them. So opening a PDF file in a text editor like Vim will show something like this:

[Screenshot: a PDF file opened in Vim]

Without getting into the entirety of the PDF spec, let us see what this means. A PDF is a collection of objects. There is usually an identifier like 16 0 obj (two integers followed by obj), then some metadata, and then a stream of binary data starting with stream and ending with endstream and endobj. An image in our case would be represented as:

16 0 obj
<< /Length 17 0 R /Type /XObject /Subtype /Image /Width 242 /Height 291 /Interpolate
true /ColorSpace 7 0 R /Intent /Perceptual /BitsPerComponent 8 /Filter /DCTDecode
>>
stream
Image binary data here like ÿØÿá^@VExif^@^@MM^@*^@^@^@^H^@^D^A^Z^@^E^@^@
endstream
endobj

So to successfully replace an image we will have to replace the image binary data and the metadata like width and height.

Step 2 – Uncompressing the PDF and extracting the images

Use a PDF manipulation toolkit called PDFtk:

pdftk sample.pdf output uncompressed.pdf uncompress

This command uncompresses the file and makes it easier to read and manipulate. Let us open uncompressed.pdf in Vim to see the difference.

[Screenshot: the uncompressed PDF in Vim]

Step 3 – Identifying the image to replace

A PDF is essentially a collection of objects, and a single file might contain multiple images, so there is no way to identify a particular image just by looking at the binary data (unless you are from the Matrix). We first have to extract the images from the PDF and then match each PDF object to an image using its metadata, like height and width. To do that, install the pdfimages command-line tool (part of poppler-utils) and run pdfimages -list uncompressed.pdf. This lists all the images in the PDF along with their metadata.

page   num  type   width height color comp bpc  enc interp  object ID x-ppi y-ppi size ratio
--------------------------------------------------------------------------------------------
   1     0 image     277   185  icc     3   8  jpeg   yes       11  0   113   113 69.2K  46%
   1     1 image     277   185  icc     3   8  jpeg   yes       10  0   113   113 31.9K  21%
   1     2 image     242   291  icc     3   8  jpeg   yes       12  0   112   112 55.2K  27%

Next extract all the images in their original formats using

pdfimages -all uncompressed.pdf image

That extracts the files and names them with the prefix we provided: image-000.jpg, image-001.jpg, image-002.jpg.

Now open the images, check their height, width and file size, and note down the details of the one to replace. In my case the file details were:

  • height – 185
  • width – 277
  • size – 70836

There are two images which match that height and width; thankfully they have different file sizes.

Step 4 – Identifying the object in PDF that represents the image

I opened uncompressed.pdf in Vim and searched for the most unique value I had for the image – its size.

[Screenshot: identifying the image object]

Now we can identify the object identifier; in this case it is 11 0 obj.

Step 5 – Replacing the image with another image

Now the job is to switch object 11’s image data with our image’s data. You can use the following Python script to achieve that.


import sys
import os

from PIL import Image

# Include the \n to ensure an exact match and avoid partials like 111, 211...
OBJECT_ID = b"\n11 0 obj"


def replace_image(filepath, new_image):
    # Read the whole PDF as raw bytes
    with open(filepath, "rb") as f:
        contents = f.read()

    # Dimensions and byte length of the replacement image
    image = Image.open(new_image)
    width, height = image.size
    length = os.path.getsize(new_image)

    # Locate the object and the start of its binary stream
    start = contents.find(OBJECT_ID)
    stream = contents.find(b"stream", start)
    image_beginning = stream + 7  # len("stream") plus the newline after it

    # Process the metadata and update it with the new image's details
    meta = contents[start:image_beginning]
    meta = meta.split(b"\n")
    new_meta = []
    for item in meta:
        if b"/Width" in item:
            new_meta.append("/Width {0}".format(width).encode())
        elif b"/Height" in item:
            new_meta.append("/Height {0}".format(height).encode())
        elif b"/Length" in item:
            new_meta.append("/Length {0}".format(length).encode())
        else:
            new_meta.append(item)
    new_meta = b"\n".join(new_meta)

    # Find where the old image data ends (keep the newline before endstream)
    image_end = contents.find(b"endstream", stream) - 1

    # Read the new image data
    with open(new_image, "rb") as f:
        new_image_data = f.read()

    # Recreate the PDF file with the new image in place of the old one
    with open(filepath, "wb") as f:
        f.write(contents[:start])
        f.write(b"\n")
        f.write(new_meta)
        f.write(new_image_data)
        f.write(contents[image_end:])


if __name__ == "__main__":
    if len(sys.argv) == 3:
        replace_image(sys.argv[1], sys.argv[2])
    else:
        print("Usage: python process.py <pdffile> <new_image>")


Download the file, change the OBJECT_ID value, save the file and run:

python process.py <your pdf> <new image>

I just used one of the extracted images to replace another one. Here are the before and after images.

[Screenshot: PDF with the image replaced]

Step 6 – Compressing the file back (OPTIONAL)

Do this only if you really need to for some reason. It is usually fine to just use the uncompressed file.

pdftk uncompressed.pdf output replaced.pdf compress

Python Technical Interview – An Experience

As a freelancer, one of the things that comes with getting a project or a job is handling technical interviews. So far I have managed to convince clients with a work sample, a test project and so on. This is literally the first time I sat through a full technical interview, and it did teach me a few lessons. Let me document it for future use.

It started off with the basics of the language:

1. What is the difference between an iterable and an iterator?

Vincent Driessen provides a clear explanation of the difference, with examples, here: https://nvie.com/posts/iterators-vs-generators/
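To make the distinction concrete, a minimal sketch:

numbers = [1, 2, 3]   # a list is an iterable: iter() can produce an iterator from it
it = iter(numbers)    # an iterator remembers its position in the sequence
print(next(it))       # 1
print(next(it))       # 2
# The list can be iterated again from the start; `it` can only continue forward.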

As an aside, Vincent has a number of really great posts, like his Git workflow model that I have used in my projects. Bookmark it.

2. What is a Context Manager? What is its purpose? How is it different from a try…finally block? Why would you use one over another?

Context managers are functions or classes that allow us to acquire and release resources as required. They are used with the with keyword in code.

The difference between context manager and try..finally block is explained in technical detail here: https://stackoverflow.com/questions/26096435/is-python-with-statement-exactly-equivalent-to-a-try-except-finally-bloc

But a simpler, more practical explanation of the difference is given by Dan Bader: https://dbader.org/blog/python-context-managers-and-with-statement
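A minimal sketch of both forms, for comparison:

from contextlib import contextmanager

# try...finally: the cleanup is spelled out at every call site
f = open("notes.txt", "w")
try:
    f.write("hello")
finally:
    f.close()

# Context manager: the acquire/release logic lives in one place
@contextmanager
def opened(path, mode="r"):
    f = open(path, mode)
    try:
        yield f
    finally:
        f.close()

with opened("notes.txt", "w") as f:
    f.write("hello")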

3. Can you tell me some advantages of Python over other languages?

I rambled something like: it is easier to read and write, the file structure (I should have said modules/packages) is great, even modern iterations of JavaScript are copying the import … from syntax, a lot of things have native implementations in the standard library, and so on.

But what my interviewer was looking for were the words “automatic garbage collection”, because the next question was

4. How does Python handle memory?

Python has automated memory management and garbage collection. That is why we never have to worry about allocating memory ourselves the way we do with C’s malloc/calloc functions.

5. Do you know how Python does that? Do you know about GIL?

Sheepish smiles and a string of no’s ensued. A few months back I ran into an issue, I think with a DB connection or something, which led me down a rabbit hole that ended with the GIL. I should have learnt about it that day.

Anyway, here is the article about Python’s memory management. https://realpython.com/python-memory-management/

6. Have you worked on projects involving multi-threading? What do you know about multi-threading?

I hadn’t. Someday maybe I will.

7. Can you explain in detail the steps involved in a form submit to response cycle in detail?

https://developer.mozilla.org/en-US/docs/Learn/HTML/Forms/Sending_and_retrieving_form_data

8. How does the browser know where your server is when the information is submitted to a particular URL?

DNS servers resolve the domain name in the URL to the server’s IP address.

9. The server sends back text as a string how do you see colorful information in browser?

The text is converted into DOM elements, which are rendered by the browser’s rendering engine.

10. If a browser is showing unreadable character and question marks instead of displaying the information what could be the reason?

A document encoding mismatch. The server might send the data encoded as UTF-8 while the browser decodes it as ASCII or Latin-1, resulting in weird characters and question marks being rendered in the browser.
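A quick way to see this kind of mismatch in Python:

text = "résumé"
payload = text.encode("utf-8")     # what the server sends
print(payload.decode("latin-1"))   # what a browser assuming Latin-1 displays: rÃ©sumÃ©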

11. You said Unicode and UTF-8 what is the difference?

Unicode is the name of the character set. UTF-8 and UTF-16 are encodings of that character set, using 8-bit and 16-bit code units respectively.

For a deep dive into Unicode (a must-read): https://www.joelonsoftware.com/2003/10/08/the-absolute-minimum-every-software-developer-absolutely-positively-must-know-about-unicode-and-character-sets-no-excuses/

12. What kind of request does the browser make to a server? And what are the types of requests that can be made?

Browsers make HTTP requests. The types are GET, POST, PUT, DELETE, HEAD, OPTIONS and so on. (I think I said UPDATE instead of PUT, silly.)

13. What is the difference between `==` and `===` in JavaScript?

StackOverflow: https://stackoverflow.com/questions/523643/difference-between-and-in-javascript

Some other questions that were asked:
1. Do you know Docker? Have you used AWS?
2. Do you know database schema design?
3. You have a SQL query that takes a long time to execute. How would you begin to make it faster? Do you know about query optimisation and execution plans?

QGIS – Creating new column from existing using Python

Yesterday, while working on the ward level parks map of Chennai, I had to join a CSV data layer with a boundary polygon layer. There was one issue: my CSV file had the ward numbers as integers (1, 2, 3, etc.) while the polygon layer had them as strings (Ward 1, Ward 2, Ward 3, etc.). So I was thinking, wouldn’t it be nice to just strip the word “Ward” and put the number in a new column, so that I can make the join by matching ward numbers? It turns out the Python integration in QGIS is so good that I did it without even searching the internet. Here is how.

  1. Open the Attribute table
  2. Open Field Calculator.
  3. Enter the “Output field name”
  4. Switch to “Function Editor”
  5. Click the [+] button to create a new function file.
  6. Change the function name and parameters, and return the value after stripping “Ward ” from the string. Read the docs shown below the function editor to understand what goes on in the file.
[Screenshot: QGIS Field Calculator]
from qgis.core import *
from qgis.gui import *

@qgsfunction(args='auto', group='Custom')
def strip_ward(name, feature, parent):
    return name.split(" ")[-1]

Now switch back to the Expression tab and call the function to calculate the new field:
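The expression would look something like strip_ward("Name") (assuming the ward name field in the layer is called "Name"; substitute your layer’s actual field name).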

[Screenshot: the strip_ward expression in the Field Calculator]

Click OK. Now the new field with the computed value would be created.

I had a simple use case, but one can use the power of Python to calculate anything from existing data and generate a new field based on it. I was really blown away by the level of Python integration in QGIS.

Creating an Icon for my blog

When I moved the site from Jekyll to WordPress, WordPress asked me to create a site icon. I tried playing around with the letter “t” from my screen name “tecoholic” in a couple of vector editors, using different fonts, hand-drawn symbols and so on, and finally landed on what I know best: writing a Python script for it. So here it is, my blog icon and its generator. It is just a stacking of “T”s, but somehow it looks like the corner of an ancient Chinese house.

#!/usr/bin/env python
"""
A script to generate SVG icon for the personal blog.
"""
import svgwrite

width = 256
height = 256
mtop = mbottom = mright = mleft = 256/8

dwg = svgwrite.Drawing(filename="blog_icon.svg", size=(height, width))

def draw_pattern(width, color):
    xpos = 256/8 + mleft
    ypos = 256/8
    increment = 256*2/8
    vlines = dwg.add(dwg.g(id="vlines", stroke=color, stroke_width=width, stroke_linecap="round"))
    hlines = dwg.add(dwg.g(id="hlines", stroke=color, stroke_width=width, stroke_linecap="round"))
    while (xpos < 256*7/8):
        vlines.add(dwg.line(start=(xpos,ypos), end=(xpos, 256 - mbottom)))
        hlines.add(dwg.line(start=(mleft, ypos), end=(xpos+mright, ypos)))
        xpos += increment
        ypos += increment

draw_pattern(20, "black")
draw_pattern(8, "white")

dwg.save(pretty=True)

That creates the SVG; then it is just a matter of using ImageMagick to create PNG files in all the required sizes:

#!/usr/bin/env bash
python blog_icon_generator.py
convert -background none blog_icon.svg blogo_256.png
convert -background none -resize 512x512 blog_icon.svg blogo_512.png
convert -background none -resize 128x128 blog_icon.svg blogo_128.png
convert -background none -resize 64x64 blog_icon.svg blogo_64.png
convert -background none -resize 32x32 blog_icon.svg blogo_32.png

[Image: blogo_256 – the generated icon]

Python Tip #9 – sorting

Sorting is simplified in Python with sorted(). You can even sort with complex rules.

>>> strings = ['alice', 'bob', 'donald', 'cathy']
>>> sorted(strings)
['alice', 'bob', 'cathy', 'donald']

>>> sorted(strings, key=len)
['bob', 'alice', 'cathy', 'donald']

>>> def secondchar(word):
...    return word[1]

>>> sorted(strings, key=secondchar)
['cathy', 'alice', 'bob', 'donald']
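For completeness, the same kind of rules can be written inline with a lambda, and the order flipped with reverse=True:

>>> sorted(strings, key=lambda word: word[1])
['cathy', 'alice', 'bob', 'donald']

>>> sorted(strings, key=len, reverse=True)
['donald', 'alice', 'cathy', 'bob']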

Python Tip #8 – reducing looping by using dicts

In situations where you have a list of objects and have to retrieve them in random order, dictionaries can act as lookup tables.

users = [...]  # a list of User objects

# Get users one by one by looking up ids
user_1 = next((u for u in users if u.id == user_1_id), None)
user_2 = next((u for u in users if u.id == user_2_id), None)
...

# Simpler solution using a lookup table
lookup = dict((u.id, u) for u in users)
user_1 = lookup[user_1_id]
user_2 = lookup[user_2_id]
...

This tip is not very obvious, hence this explanation:

user_1 = next((u for u in users if u.id == user_1_id), None)

This approach runs a generator through the list of users every time we have to find a user; to look up a hundred users we would loop through the list a hundred times, giving a complexity of O(N²).

lookup = dict((u.id, u) for u in users)
user_1 = lookup[user_1_id]

This method, on the other hand, iterates through the users list once to build a lookup table that we can use again and again without iterating through the list every time. This reduces the complexity to O(N), which can make the program dramatically faster as the number of lookups grows.
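In modern Python the lookup table is usually written as a dict comprehension, which reads a little better:

lookup = {u.id: u for u in users}
user_1 = lookup.get(user_1_id)  # .get() returns None for unknown ids, like the next() version above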