Goodbye! Brave

Final Update (moved old updates to bottom)

Brave supports Chrome extensions. The problem was with the author’s version of Brave; it was roughly a year old. Very old versions of Brave didn’t include service keys (necessary for interacting with Brave’s privacy-preserving proxy-service), whereas modern versions do (which is why you and I are able to install extensions without any issue)

Sampson from Brave

To explain – the repository I installed Brave from hasn’t published a newer version of the browser since December 2020. So the keys it shipped with have become outdated. Since no update was available, I didn’t see the usual orange “Update” button on the toolbar.

⚠️ NOTICE: Closing comments as they have moved from discussing the issue to attacking me for not being crypto friendly.

Original Post

I have been using the Brave Browser for almost 2 years, I think. @logic introduced it to me at some point, and it has been my primary browser on both desktop and mobile, at home and at the office, ever since.

I got my first heads-up when I came across a post on HackerNews about Brave misbehaving because the “Brave backend servers” were unreachable. It struck me as strange when a comment on the GitHub ticket mentioned that Brave’s servers need to be up for Brave to function.

This is a big design NO-NO for something as essential as a web browser. But the inertia of it being my daily driver, its amazing ad-blocking and tracker protection, Chrome extension compatibility, and the fact that I hadn’t faced any such issues kept me from making any changes.

Today I was looking to install an extension to manage browser tabs and I ran into this:

Can’t install any extension

I thought maybe the extension was buggy and tried a couple more: the same result for every one of them. Searching for the error led me to this GitHub ticket, which again describes it as a “server-side” issue and says it was fixed.

Well, it is not fixed for me. But that’s beside the point. This amount of dependency on “backend servers” is ridiculous for a browser. For software as important as a browser, through which I access almost everything digital, it is unacceptable. So, with this post being the last thing I do on Brave, I bid goodbye.

Exploring options…

  1. An interesting alternative is Vivaldi – it is trying to do what Opera was doing pre-Chrome. It rolls email, calendar, RSS reader, and browser all into one, and also provides built-in ad-blocking.
  2. Open-source Chrome, aka Chromium – this used to be my primary driver before. So I am thinking of going back to it with the usual extensions like Ghostery, Adblock Plus, etc. Not sure how much things have changed there.

Update:

Not sure who posted this on HackerNews, but thanks for all the feedback.

  1. I will be trying Firefox. So many people have recommended it. It’s something I have forgotten over the last couple of years; before that, it frequently caused issues and was only my secondary browser for testing.
  2. There is nothing sinister about the decision, nor any PR at work. I tried installing extensions, it didn’t work, and I uninstalled and made a note of why I was doing it. Interpretations are all yours.

Update 2:

This is for the people suggesting I jumped the gun and probably didn’t take the time to understand the real problem. I am a Chrome extension author myself; I had published a new version of my extension just 8 hours earlier and tested installation on Brave and Chrome. So I understand the issue. And I have linked to the GitHub issues where this has been discussed.

Featured on TheNextWeb & Lifehacker

Something really cool happened this week. I will let the tweets take over.

… and that’s how I made it to the homepage of TheNextWeb.

… and Lifehacker

Source code of the extension: https://github.com/tecoholic/Just-Arrived

For Chrome: Chrome Webstore

For Firefox: https://addons.mozilla.org/en-GB/firefox/addon/just-arrived-ff/

What did I learn from this?

The most important thing I learnt while doing this is that the extension architecture is standardised across Chrome and Firefox. Thanks to Shrinivasan for asking me to port it to Firefox.

But I think the relationship is one-sided: Firefox can run extensions written for Chrome, but Chrome won’t run extensions written for Firefox. This is due to the nature of Firefox’s API and the fallback it offers.

For example, the storage API on Firefox is browser.storage.*, whereas on Chrome it is chrome.storage.*. Since Firefox provides fallbacks for all the chrome.* APIs, code written primarily for Chrome works without modification on Firefox. But if a developer writes the extension for Firefox first, against the browser.* namespace, it won’t work on Chrome, which doesn’t offer that namespace.

More technical details here at MDN web docs: Building a cross-browser extension

Special thanks to @tshrinivasan for pushing me to build it for Firefox, to @SuryaCEG for the UX advice, and to @IndianIdle for writing the article.

Two Days with Python & GraphQL

Background

A web application needed to be built. An external API would give me a list of information packets as JSON. Each packet holds the information and a user object. The application’s job is to store this data in a local database and provide a user interface to sort and filter it. Simple enough.

GraphQL kept coming up on the internet. A number of tools advertised GraphQL support on their home pages, which made me curious. The requirement also said:

use the technology of your choice REST/GraphQL to build the backend

Now I had to see what it’s all about. So I sat down, read the docs, and got a basic understanding of it. It made total sense theoretically. It solved a major problem I face when building single-page applications and their backend REST APIs independently: the opaqueness of the incoming data and the right method to get it.

Common Scenario I run into

While building the frontend, we use the schema that the backend team gives us as the source of truth and build based on that. But the schema becomes stale after a while and changes need to be made. There are many reasons for it:

  • adding/removal/renaming of an attribute
  • optimisations that come into play, which alter the structure
  • the backend API is a third party one and searching and sorting are involved
  • API version changes
  • access control which restricts the information returned, etc.

And even when we have a stable API, there is the issue of information leaks. When you are working with user roles, it becomes very confusing very quickly, because a request to /user/ returns different objects based on the role of the requester. An admin sees a different set of information than a privileged user, and a privileged user sees a different set of data than an unprivileged one.

And more often than not, APIs dump a lot more information onto the frontend than is required, which sometimes even leads to security issues. If you want to see API response overload, take a look under the hood of the Twitter web app, for example: the API responses carry a lot more information than what we see on screen.


Enter GraphQL

GraphQL basically said to me: let’s streamline this process a little. First, we stop maintaining resource-specific URLs and just send all our requests to /graphql, and that’s it. We won’t be at the mercy of the backend developers’ whims and fancies about how to construct the URL. No more confusion between /course/course_id/lesson/lesson_id/assignments and /assignments?course=course_id&lesson=lesson_id. Next, no, we are not going to use HTTP verbs; everything is just a POST request. And finally, no more information overload: you get only what you ask for. If you want 3 attributes, you ask for 3; if you want 5, you ask for 5.

Let us eliminate the ambiguity and describe what you want as a GraphQL document, and post it. I have been sick of seeing SomeObject.someAttribute is undefined errors, so I was willing to put in the effort to define my requests clearly even if it meant a little bookkeeping. Now I would know the exact attributes I was going to work with, and I could filter, sort, and paginate, all just by defining a query.

It was a breath of fresh air for me. After some hands-on experiments I was hooked. This simple app, with two types of objects, was the perfect candidate for getting some experience with the subject.

Day/Iteration 1 – Getting the basic pipeline working

The first iteration went pretty smoothly. I found a library called Graphene that implements GraphQL for Python with support for SQLAlchemy; I added it to Flask with Flask-GraphQL, and in almost no time I had an API up and running that would get me the objects, complete with sorting and pagination. It was wonderful. I was a little confused initially because Graphene implements the Relay spec, so my queries looked a little overdefined, with edges and nodes, compared to plain ones. I just worked with it. I read a quick intro about Connections and realised I didn’t need to worry about them, as I was going to be querying just one object; whatever implications they had were for complex relations.
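For flavour, here is roughly what that wiring looks like. This is a minimal sketch, not the project’s actual code: the model and field names are made up, a database session would still need to be wired up (e.g. via a scoped session), and the exact connection-field signature varies between library versions.

from flask import Flask
from flask_graphql import GraphQLView
import graphene
from graphene import relay
from graphene_sqlalchemy import SQLAlchemyObjectType, SQLAlchemyConnectionField
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class MyObject(Base):  # hypothetical model, standing in for the real one
    __tablename__ = "my_object"
    id = Column(Integer, primary_key=True)
    attribute_1 = Column(String)
    attribute_2 = Column(String)

class MyObjectNode(SQLAlchemyObjectType):
    class Meta:
        model = MyObject
        interfaces = (relay.Node,)  # the Relay spec is why queries have edges/nodes

class Query(graphene.ObjectType):
    node = relay.Node.Field()
    # The connection field is what provides pagination and sorting for free
    # (older library versions took the node class directly instead of .connection)
    all_objects = SQLAlchemyConnectionField(MyObjectNode.connection)

schema = graphene.Schema(query=Query)

app = Flask(__name__)
app.add_url_rule(
    "/graphql",
    view_func=GraphQLView.as_view("graphql", schema=schema, graphiql=True),
)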

For the frontend, I added Vue-Apollo to the app, wrote my basic query, and the application was displaying data on the web page in no time. It replaced both Vuex state management and the Axios HTTP library in one swoop.

And to help with query design, there was a helpful auto-completing UI called GraphiQL, which was wonderful.

Day/Iteration 2 – Getting search working

Graphene comes with sorting and filtering built in. But the filtering is only available if you use Django, as it uses django-filter underneath; for SQLAlchemy and Flask, it only offers some tips. Thankfully, there was a library called Graphene-SQLAlchemy-Filter which solved this exact problem. I added that and voilà, we had a searchable API.
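The library’s pattern looks roughly like this. Again a sketch under assumptions: the model and field names are hypothetical, reusing the MyObject model from the earlier snippet.

from graphene_sqlalchemy_filter import FilterableConnectionField, FilterSet

class MyObjectFilter(FilterSet):
    class Meta:
        model = MyObject  # the same hypothetical SQLAlchemy model as above
        # which filter operations are allowed per column
        fields = {"attribute_1": ["eq", "ilike"], "attribute_2": ["eq"]}

class MyObjectConnectionField(FilterableConnectionField):
    filters = {MyObject: MyObjectFilter()}

# Using MyObjectConnectionField in the Query (in place of
# SQLAlchemyConnectionField) is what enables the filters argument
# shown in the queries below.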

Implementing search on the frontend is where things started going sideways. I had to query all the data when loading the page, so the query looked something like:

query queryName {
  objectINeeded {
    edges {
      node {
        id
        attribute_1
        attribute_2
      }
    }
  }
}

And in order to search for something, I needed to do:

query queryName {
  objectINeeded(filters: { attribute_1: "filter_value" }) {
    ...
  }
}

And to sort it would change to:

query queryName {
  objectINeeded(sort: ATTRIBUTE_1_ASC, filters: { attribute_1: "filter_value" }) {
    ...
  }
}

That’s okay for predefined values of sorting and filtering, but what if I wanted to do it based on user input?

1. Sorting

If you look closely, the sort value is not exactly a string I could take from the user as input, and frankly not even one I could generate: it is an enum. So I would have to define an enum with all the supported sort orders and use that. How do I do that? I would have to define them in a separate GraphQL schema document. I tried doing that and configured webpack to build them, and failed miserably. For one, I couldn’t get it to compile the .graphql files; the web loader kept throwing errors, and I lost interest after a while.

2. Searching

The filters argument is a complex JSON-like object that can express OR and AND conditions and more. I wanted the values to be based on user input. Apollo supports variables for that purpose. You can do something like this in the Vue script:

apollo: {
  myObject: {
    query: gql`query GetDataQuery($value1: String, $value2: Int) {
      objectINeed(filters: [{ attr1: $value1 }, { attr2: $value2 }]) {
        ...
      }
    }`,
    variables() {
      return { value1: this.userInputValue1, value2: this.userInputValue2 }
    }
  }
}
This is fine when I want to use both inputs for searching, but what if I want to use only one? Well, it turns out I have to define a different query altogether; there is no way to make a filter optional. See the docs on Reactive Queries.
That was a lot of yak shaving I was not willing to do.

Even if I did the yak shaving, I ran into trouble on the backend with nested querying. For example, what if I wanted to get the objects based on the associated user? My query would look more like:

query getObjects {
  myObject {
    attr1
    attr2
    user(filters: { first_name: "adam" }) {
      first_name
    }
  }
}

The Graphene-SQLAlchemy documentation said I could do it, and even gave examples, but I couldn’t get it working. And when I wanted to implement it myself, the abstraction was so deep that I would have had to spend too many hours on just that.

3. The documentation

The most frustrating part of figuring all this out was the documentation. For some reason, GraphQL docs assume that if you use Apollo on the frontend, you must be using Apollo Server on the backend. It turns out there is no strict definition of the semantics of searching/filtering, only of how to express a query. So the design on the backend has to match the design on the frontend. (Now where have I heard that before?) And that’s the reason documentation usually shows both the client- and server-side implementations.

4. Managing state

An SPA has a state management library like Vuex or Redux to manage application state, but with GraphQL, local state is managed with a GraphQL cache. It improves efficiency by reducing calls to the server. But here is the catch: you have to define the schema of the objects for that to work. That’s right, define the schema, as in write the models in GraphQL documents. It is no big deal if your stack is fully NodeJS; you can just do it once and reference it in both places.

In my case, I had already defined my SQLAlchemy models in Python on the backend, and I would have to define them again in GraphQL for the frontend. So the two have to be kept in sync if anything changes. And remember that each query is defined separately, so I would also have to update every query affected by the changes.

At this point I was crying. I had spent close to 8 hours figuring all this out.

I gave up, rewrote the entire freaking app using a REST API, finished the project including the UI in the next 6-7 hours, and went to bed at 4 in the morning.

Learning

  1. GraphQL is a complex solution for a complex problem. You can solve simple problems with it, but the complexity will hit you at some point.
  2. It provides a level of clarity in querying data that a REST API doesn’t, but it comes at a cost. It is cheap for cheap work and costly for larger requirements. Almost like how AWS bills rise.
  3. No, it doesn’t provide the kind of independence between the backend and frontend that it seems to on the surface. This might be my lack of understanding and not the goal of GraphQL at all, but if you, like me, made that assumption, know that it is invalid.
  4. Use low-level libraries to implement GraphQL, and try to keep the stack NodeJS, at least for the sake of sharing the schema documents if not for anything else. If I had implemented the resolvers myself instead of depending on Graphene and adding a filter library on top of it, I would have fared better.

Moving back from Mac to Windows + Linux

Content Warning: Rant ahead

As my MacBook Air was becoming more and more restrictive in what I could do, due to its low 4 GB memory and 128 GB SSD, I decided to buy a new laptop with better specifications. After some filtering and comparison on Flipkart and Amazon, I finally settled on a Lenovo S540 14″ with 8 GB RAM and a 1 TB SSD. It also came fitted with a 2 GB graphics card, which I think will make working with ML algorithms easier. While the hardware is great for my requirements, the software is a complete letdown.

Issue 1: Windows Font Rendering is Crap

The screen is a full HD 1920×1080 display at 14 inches. One would think it would at least match the display of my MacBook Air (1440×900), but nope. Not a chance.

The system recommends a scaling of 150% for good results; anything below that, the system font Calibri starts breaking down, and there seems to be no anti-aliasing at all.

There are a couple of solutions to this problem, like setting the scaling to 100% and increasing the font size separately. This works to a certain degree, but doesn’t achieve the smoothness of 150% scale.

Now I have an interface that seems to be adjusted for my grandma’s failing eyesight.

Issue 2: Microsoft loves Linux – My Foot

I think the whole “MS loves Linux” nonsense started around the same time I bought a MacBook, so I never experienced what it meant. I get it now: they wanted to sell Linux machines on their Azure cloud, and that’s about it. Whatever contributions they have made must have centered around that goal, because installing Linux on a Windows 10 machine is more difficult now than it was 5-8 years ago. Back then, it was just a matter of knowing how to partition disks and how to choose the boot disk. Now I had to:

  • Create the bootable disk in a specific format for UEFI compatibility
  • Run a command to change the storage access method from RST to AHCI
  • Go into the BIOS, disable Secure Boot, and switch to AHCI
  • Boot into Safe Mode so that the disk works with the changed storage mode
  • Finally, boot into the install disk and install.

What should have taken me 15-30 minutes took me 2 and a half hours.

Issue 3: Windows 10 is a Data Collection Pipeline

I am really horrified at the number of switches I had to turn off during the setup process, and I still keep finding more as I use the system.

Issue 4: Application Management in Windows

Windows Store is a disaster. I don’t know what is installed on my system and what isn’t. There are tiles for games that aren’t installed, and there is no way to differentiate between the tile of an installed application and the tile of a shortcut for an application that is recommended for install.

Issue 5: Why are tiles in Start Menu?

With 150% scale, it always feels like I am seeing only a part of the actual screen when the tiles come up. I don’t understand how MSFT decided they should go back to the start menu but keep the tiles nonetheless. Either tile or don’t; consistency, please. The mashup is a nuisance, and everybody just has to learn to live with it.

Issue 6: Application Management in Ubuntu

So everybody has been bitten by the centralised application distribution model. But tell me, which serious software actually gets published there? At least none of the ones I use, even in the macOS ecosystem, which started the store concept. MS Office, Adobe Creative Suite, IDEs like PyCharm, Android Studio, Eclipse, browsers… everything is a package download from the vendor’s site. But that hasn’t stopped Canonical from creating the Snap store. Now I seriously don’t know why there is a Software Centre and also a Snap Store and the good old apt package manager.

The Good Bits in Linux

It’s been 24 hours of hell with the new system. Yet, not everything is bad.

  • Once up and running, I haven’t encountered WiFi or Bluetooth driver issues.
  • The kernel seems pretty stable.
  • Grub has themes, and OS selection is stylish.
  • Memory usage is pretty low.
  • Font rendering and anti-aliasing are spot on. I think I just need some time to get used to the change from a 16:10 to a 16:9 aspect ratio.
  • The drivers for the graphics card are in place.
  • Tap-to-click and natural scrolling keep my UX the same across both my machines.

Conclusion

After a frustrating 24 hours of setting up the system, I have completely given up on Windows. As usual, Linux will be my primary OS. I will turn to Windows for recording tutorial videos, when collaboration requires MS Office, or maybe for games. If money weren’t an issue, I don’t think I would have moved from Mac to PC at all. Things like three-finger application switching and desktop switching are still etched in me. So, personally, I prefer

  1. MacOS
  2. Ubuntu
  3. Windows… I would try my best not to boot this thing.

Python Technical Interview – An Experience

As a freelancer, one of the things that comes with getting a project/job is handling technical interviews. I have so far managed to convince clients with a work sample, a test project, etc. This is literally the first time I sat for a full technical interview, and it did teach me a few lessons. Let me document them for future use.

It started off with the basics of the language:

1. What is the difference between an iterable and an iterator?

Vincent Driessen provides a clear explanation of the difference, with examples, here: https://nvie.com/posts/iterators-vs-generators/
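The gist of it, as a quick sketch:

nums = [1, 2, 3]   # iterable: implements __iter__, can be looped over repeatedly
it = iter(nums)    # iterator: implements __next__, consumed as you go
print(next(it))    # 1
print(next(it))    # 2
print(list(it))    # [3] – whatever was left; the iterator is now exhausted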

As an aside, he has a number of posts that are really great, like his Git workflow model, which I have used in my projects. Bookmark it.

2. What is a Context Manager? What is its purpose? How is it different from a try…finally block? Why would you use one over another?

Context managers are functions/classes that allow us to acquire and release resources as required. They are used with the with keyword in code.

The difference between context manager and try..finally block is explained in technical detail here: https://stackoverflow.com/questions/26096435/is-python-with-statement-exactly-equivalent-to-a-try-except-finally-bloc

But a simpler, more practical explanation of the difference is given by Dan Bader: https://dbader.org/blog/python-context-managers-and-with-statement
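A small sketch of the equivalence, using the file-handling classic:

from contextlib import contextmanager

@contextmanager
def opened(path):
    f = open(path)
    try:
        yield f        # setup done; hand the resource to the with-block
    finally:
        f.close()      # teardown runs even if the block raises

with open("example.txt", "w") as f:
    f.write("hello")

with opened("example.txt") as f:       # context-manager version
    print(f.read())

f = open("example.txt")                # equivalent try...finally version
try:
    print(f.read())
finally:
    f.close()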

3. Can you tell me some advantages of Python over other languages?

I rambled something like: it is easier to read and write; the file structure (I should have said modules/packages) is great; even modern iterations of JavaScript are copying the import … from syntax; a lot of things are natively implemented in the standard library; etc.

But what my interviewer was looking for were the words “automatic garbage collection”, because the next question was:

4. How does Python handle memory?

Python has automated memory management and garbage collection. That is why we never worry about how much memory we are allocating, the way we do with C’s malloc/calloc functions.
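A small illustration of what CPython does underneath (reference counting, plus a cycle collector):

import gc
import sys

x = []
y = x                      # a second reference to the same list
print(sys.getrefcount(x))  # the count; getrefcount itself adds a temporary one

del y                      # drop a reference; the object is freed at count zero
gc.collect()               # the cycle collector frees reference cycles
                           # that plain reference counting cannot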

5. Do you know how Python does that? Do you know about GIL?

Sheepish smiles and no’s ensued. I had run into an issue a few months back, I think with a DB connection or something, which led me down a rabbit hole that ended with the GIL. I should have learnt it that day.

Anyway, here is the article about Python’s memory management. https://realpython.com/python-memory-management/

6. Have you worked on projects involving multi-threading? What do you know about multi-threading?

I hadn’t. Someday maybe I will.

7. Can you explain in detail the steps involved in a form submit-to-response cycle?

https://developer.mozilla.org/en-US/docs/Learn/HTML/Forms/Sending_and_retrieving_form_data

8. How does the browser know where your server is when the information is submitted to a particular URL?

DNS servers – IP resolution
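The same resolution step, done explicitly in Python:

import socket

# The browser asks a DNS resolver the same question this call does:
print(socket.gethostbyname("example.com"))  # prints the resolved IP address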

9. The server sends back text as a string; how do you see colourful information in the browser?

The text is converted into DOM elements, which are rendered by the browser’s rendering engine.

10. If a browser is showing unreadable characters and question marks instead of the information, what could be the reason?

Document encoding mismatch. The server might send the data encoded as UTF-8 while the browser decodes it as ASCII or Latin-1, resulting in weird characters and question marks being rendered in the browser.
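A quick way to reproduce the effect:

text = "café"
data = text.encode("utf-8")    # what the server sends over the wire
print(data.decode("latin-1"))  # what a mismatched browser renders: cafÃ©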

11. You said Unicode and UTF-8 what is the difference?

Unicode is the term used to describe the character set: the mapping from characters to code points. UTF-8 and UTF-16 are encodings of those code points, using 8-bit and 16-bit code units respectively (both are variable-width encodings).

For deep dive into Unicode (a must): https://www.joelonsoftware.com/2003/10/08/the-absolute-minimum-every-software-developer-absolutely-positively-must-know-about-unicode-and-character-sets-no-excuses/

12. What kind of request does the browser make to a server? And what are the types of requests that can be made?

Browsers make HTTP requests. The types are GET, POST, PUT, DELETE, HEAD, OPTIONS, etc. (I think I said UPDATE instead of PUT, silly.)

13. What is the difference between `==` and `===` in JavaScript?

StackOverflow: https://stackoverflow.com/questions/523643/difference-between-and-in-javascript

Some other questions that were asked:
1. Do you know Docker? Have you used AWS?
2. Do you know database schema design?
3. You have a SQL query that takes a long time to execute. How would you begin to make it faster? Do you know about query optimisation and execution plans? (A small illustration follows.)
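On that last one, sqlite3 from the standard library is enough to peek at an execution plan and check whether an index gets used (the table and column names here are made up):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX idx_users_email ON users (email)")

# EXPLAIN QUERY PLAN shows how SQLite intends to execute the statement
query = "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?"
for row in conn.execute(query, ("user@example.com",)):
    print(row)  # mentions idx_users_email when the index is chosen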

Sensible Test Data

I am currently working on a project called Peer Feedback, where we are trying to build a nice peer feedback system for college students. We use the Canvas Learning Management System (CanvasLMS) API as the data source for our application. All data about students, courses, assignments, and submissions is fetched from CanvasLMS. The application is written in Python Flask.

Current Setup

We are mostly getting data from the API and relaying it to the frontend or storing it in the DB. So most of our testing is just mocking network calls and asserting response codes. Only a few functions contain original logic, so our test suite focuses on those functions and endpoints for the most part.

We recently ran into a situation where we needed to test something that involved fetching and filtering data from the API and retrieving data from the DB based on the result.

Faker Library and issues

The problem we ran into is that we can’t test the function without first initializing the database. The code we had for initializing the CanvasLMS data used the Faker library, which provides nice fake data to create a real-world feel for us. But it came with its own set of problems:

Painful Manual Testing

While we had the feel of testing real-world information, it came with real-world problems. For e.g., I cannot log in as a user without first looking up the username in the output generated during initialization. So I had to maintain a post-it on my desktop, use search to find the user I wanted to test, copy their email, and log in with it.


Inconsistency across test cycles

When we write our tests, there is no assurance that we can reference a particular user in a test by id and expect parameters like email or username to match. With the test data being generated freshly each time, any referencing or association of values held true only for that cycle. For e.g., a function called get_user_by_email couldn’t be tested, because we didn’t know what to expect in the resulting user object.
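To see why, here is the kind of non-determinism involved (a tiny sketch using the Faker library):

from faker import Faker

fake = Faker()
print(fake.name(), fake.email())  # different values on every run, so a test
                                  # can't pin down what it should expect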

Complex Test Suite

To compensate for the inconsistency in the data across cycles, we increased the complexity of the test suite: we saved test data in JSON files and used them for validation. It became a multi-step process and almost an application on its own. For e.g., testing the get_user_by_email function would first initialize the DB, then read a JSON file containing the test data, get a user with an email and validate the function, then find a user without an email and validate that it throws the right error, then find a user with a malformed email… you get the idea. The test suite itself now had enough logic to warrant tests of its own.

Realworld problems

With the real-world-like data came real-world problems. The emails generated by Faker are not really fake: there is a high chance a number of them are used by real people. So guess what would have happened when we decided to test our email program 🙂

Sensible Test Data

We are finally switching to more sensible test data. We are dropping Faker for user generation and shifting to a sequential user generation system, with usernames like user001 and matching emails like user001@example.com. This solves the above-mentioned issues (a short sketch follows the list):

  1. Now I can log in without having to first look it up in a table. All I need to do is append an integer to the word user.
  2. I can be sure that user001 will have the email user001@example.com, and that these associations will be consistent across test cycles.
  3. I no longer have to read a JSON file to get a user object and its related information. I can simply pick one using the userXXX template, reducing the complexity of the test suite.
  4. And we won’t be getting emails from random people asking us to remove them from mailing lists, and we are probably saving ourselves from being blacklisted as a spam domain.
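A minimal sketch of the scheme (the helper is hypothetical; example.com is the domain reserved for exactly this kind of use):

def make_users(count, role="user"):
    """Generate deterministic test users: user001, user002, ..."""
    return [
        {
            "username": f"{role}{i:03d}",
            "email": f"{role}{i:03d}@example.com",  # reserved domain, no real inboxes
        }
        for i in range(1, count + 1)
    ]

users = make_users(3)
assert users[0]["email"] == "user001@example.com"  # stable across test cycles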

Conclusion

Faker provided us with data that helped us test a number of things in the frontend, like different name lengths, multi-part names, and unique names for testing filtering and searching, while also adding a set of problems that made our work difficult and slow.

Our solution for a sensible test dataset is a plain, numerically sequenced dataset.

Update

Using generic name tags like user was still causing friction, as we have multiple roles like teachers, TAs, students, etc. So I improved it further by creating users like student0000, ta000, and teacher00.

Facebook Account Deletion

I just now deleted my Facebook account.

Why?

  1. Freedom – Day by day, using it makes me feel as if someone is dictating how and what I should be talking to people about. The rampant, out-of-the-blue censorship by law enforcement and the judiciary is the reason. Even though I have never been a part of any such action, the atmosphere in that domain is getting too toxic.
  2. Irrelevancy – Most of the content I encounter on my timeline is completely irrelevant to me, and the only content I engage with is blogs like Lifehacker and the posts of a very few friends, both of which I think I can do without having to endure a non-free environment.
  3. Privacy – It has been a great concern for me ever since I became aware of its importance and ethics. I somehow kept telling myself nothing would affect me. Underestimating risks is something I am unwilling to do these days, after reading “Fooled by Randomness”.
  4. Paranoia – Closely connected to “Privacy”. I kept thinking: what if someday I discover that all the stuff I have shared has been sold or passed to someone, and that someone uses it for purposes I never supposed it to be? Am I planning a murder? No. But what if someone frames me for one? I am NOT paranoid by nature; I walk as one among the billion+ in this country. But then again, not all billion+ walk around recording what they read, where they go, and what they think.
  5. Narcissism – One of the biggest effects of Facebook on character is, I think, breeding narcissism. It breeds a kind of self-importance, provides an easy sense of achievement, and forces a person to project a personality, real or otherwise. This becomes especially stressful considering one is connected to all sorts of people, from high school friends to work colleagues. A few recent encounters with people have left a bad taste in my mouth about the whole “sharing” thing.
  6. Spam – Spam, spam, spam; there is just no end to it. I am generally very efficient at ignoring ads and spam, because all I look for is content from people, and I never click any link other than blog posts. But most of the time, genuine and original content is very hard to come by, and all sorts of false claims on history, technology, and identity creep in along with celebrity, movie, and other eyeball-catching stuff.

After considering all this, the most logical decision seems to be to leave the system and cut down on the many responses my brain generates due to unnecessary stimuli.

Will I ever get on a social network again?

I currently do not have an answer to that. There are a lot of things to consider, as listed above, and I will see how they play out before making a decision. With the current turmoil of internet censorship across the world, the misuse of social networks as tools of destruction, authoritarian control, capitalist bait, and enterprises looking to make money indirectly from my data, I don’t think I am getting onto it (FB) any time soon. I specifically mean FB here because there exists an alternative platform called Diaspora* which addresses all of my concerns above (okay, maybe not narcissism), but jumping onto such a platform is pointless if the people I want to interact with aren’t there.

The Setup

Continuing from the previous post, let me write down everything that defined my work setup.

The Curriculum

This is where the fun starts. I worked with two different curricula throughout the year. The school wanted me to teach the mandated Samacheer syllabus, and the organisation I work for wanted me to teach to the Common Core Standards. The Samacheer part comprised Social Studies, Science, and “English as a subject”; the organisation gave me Mathematics and “English as a standard” to teach. It is actually painful as a teacher to teach a language either as a subject or as a set of standards. More on that separately sometime later (which is almost never).

The Red Ink

There were 37 notebooks for each of the 3 Samacheer subjects, to be checked and corrected 3-5 times every term. Each term is about 3 or 4 months, and there were 2 mid-term tests and 1 end-of-term test per term. For the organisation’s part, we were supposed to conduct Unit Assessments (one every 6 weeks), weekly assessments, and, if possible, daily assessments. I just did the Unit Assessments; I tried weekly assessments but dropped them after a couple of weeks as it was getting out of hand. English made up for it by making me correct a set of at least 10 questions every alternate day. I remember sitting, standing, sleeping, walking, and even jumping on/off trains with my bag on my shoulder, papers in my left hand, and a red pen in my right.

The Sessions

The sessions were the organisation’s way of making sure we were fully equipped to handle everything in the classroom. They were usually planned in the evenings after school, when we were at our lowest glucose levels and looking for a corner to curl up in. The sessions did make a lot of sense to the people organising them. They were usually about how to teach, how to handle kids, and how to understand a particular area in order to deliver it the way it is supposed to be. But one thing no one seemed to care about, understand, or grasp was that there is no single way to do this stuff.

The Printer

Canon LBP2900. One trademark of being a TFI fellow is that we print more paper for each kid than the government or the school would. Having a laser printer really does help: one is free of the timing restrictions imposed by the Xerox shops and saves a lot more money. I printed about 8000-9000 pages in the last 4 months alone: 1500 rupees for all that paper, 400 rupees for the toner, and the immense flexibility of being able to print whatever and whenever.

The Travel

The travel was two- or three-legged. I usually started with a short bus ride (5E/23C/49) from Adyar Depot to Madhya Kailash, took a train from Kasthuribai Nagar station to Beach Station, and finally took the 44C from Beach Station to the Power House stop. Sometimes the 5E-train combo was replaced by the 21H/PP19 from Adyar Depot to Parry’s Corner. Initially I used the 6D from the back side of Adyar Depot, but the extra 300m of walking and the lack of alternate buses made me switch to other options. One good thing about the train travel was that I always found space to sit and even work on the laptop if required. Having a monthly season ticket for just 105 rupees was another boon: I never had to worry about tickets, queues, or oversleeping during return journeys.

 

These define the physical boundaries of how I worked over the past year. But how did I actually work? What was “the process”?

The Idea of Democracy

Before I begin (can be skipped)

I have a few things spinning inside my brain, like sawdust in a convection experiment. Though mostly ignored, they tend to jump into the middle of a conversation, or pull conversations towards themselves so they can show themselves. This has resulted in a number of “long pause” moments in my conversations. This poor blog is the place where I have decided to pour them out, so the convection can stop.

The Scene

Most Indians would have had “democracy” explained to them in school via the standard definition of “of, by, and for the people”. What it means in real life, though, is understood later, when circumstances set it right. I understood it when I watched an old movie from the 70s, perhaps even the 50s or 60s. I don’t remember the entire storyline, but the scene that gave me the idea of democracy sticks with me to this moment. It unfolds as follows:

A group of rebels fight for the freedom and liberation of their land from the clutches of a king they call evil and a rule they call tyranny, hoping to establish democracy. One day, they get hold of a bodyguard of the king. He is brought in chains and made to stand before the council of rebel leaders, who are sitting around a table discussing strategy. The bodyguard reveals nothing under the council’s questioning and proclaims his loyalty to the king. The head of the council tells his men to throw him in a dark cell. As the men holding the chains wait for the council members to leave before taking the bodyguard away, a man brings dinner to the leader, places it on the table, and spills the drink on him. The leader becomes furious at this indignity in front of an enemy and beats the man (a servant in his mind). At this point, the bodyguard laughs aloud and asks:

“Is this what you call democracy? Is this the new world order you are fighting for? If this is the example of the society you are fighting to create, my motherland is better served by my King than by your democracy.”

And he is pulled out of the room as soon as he finishes his rhetoric. The leader stands in the now empty hall, except for himself and the servant curled in a corner, stunned at what has just happened. He turns towards the man in the corner, goes and hugs him, uttering “We are all one, we are all equal…”, and the scene fades away.

The interpretation

No other song, no other writing, no other painting, no other teaching, nothing else has made me understand the meaning of democracy like this scene from an unknown movie. It essentially captures what every true believer in democracy fears: autocracy. Although no government in the world would accept it, each one of them is autocratic, with varying levels of autocratic influence. Movements like Occupy have tried their best to expose this wolf in sheep’s clothing, but little has transpired in reality.

Having said that, I do not mean to take the stand of a socialist by condemning free economies and enterprise, nor do I support the capitalist propaganda of talking only in terms of wealth. But I believe there exists enough moral ground between the two to explore and settle on. And probably the correct interpretation of democracy lies somewhere in that space.