Social Security Numbers

It’s been a while since I’ve posted here, and in the time that I’ve been gone, the IRS has expanded its Identity Protection PIN program to all taxpayers, not just confirmed victims of identity theft.

Normally, an individual taxpayer (i.e., not an organization) would only need to input their Social Security Number to file their income tax returns. Under the program, individuals with stolen identities (and, now, anyone who opts in) would also need a 6-digit PIN mailed to them at the start of every year.

It’s long overdue that the IRS has begun to move away from its old position on the PIN program. Under the old system, by the time a person was enrolled, it was already too late to prevent the vast majority of the damage. If a person has to wait until after their identity is stolen to seek protection, then by the time they do so, the thief could already have filed fraudulent tax returns with the IRS, causing a world of trouble for the real person (trouble that would eventually be sorted out, but at a great cost of time and stress).

Who would do such a thing? Maybe a disgruntled former spouse/significant other, who would likely be fully aware of your Social Security Number and a decent amount of other possibly-identifying information (e.g., your adjusted gross income for prior tax years, your phone number and address, etc.). Maybe a professional identity thief who buys your SSN on the dark web for a bargain and uses the threat of fraudulent returns to extort you (the elderly may be disproportionately susceptible to such an attack). Maybe that same thief thinks that by filing a fraudulent return they can claim a refund in your name, cash that refund, and move on just before the IRS gets wise. I’m not sure how feasible that last option is, but the fact remains that anyone with your name, date of birth, and SSN can steal your identity. Only the last piece of information is private (name and DOB are public record, at least in Florida), and an honest mistake can quickly make your SSN “public record” too.

They can open bank accounts in your name, use them for money laundering, and have the FBI knocking on your door instead of theirs. They can take out credit in your name, max out their lines, and leave you to foot the bill (not to mention your credit score!). There’s a lot of money to be made in identity theft. It’s rather simple, too.

I recently had the pleasure of applying to various apartment complexes. Most of the leases I was offered had my name, date of birth, and last 4 digits of my SSN as my identifying markers. One of them had my name, date of birth, and first 5 digits of my SSN. Take that lease and any of my other leases and you’ve just stolen my identity.

What’s your redress if this happens? The Social Security Administration will only issue you a new number if the old one is actively being used, which means that if you’ve had your identity stolen once, and you’re currently not experiencing any additional theft, you’ll have to wait until you (inevitably) start experiencing theft again before you can get a new number. And that doesn’t revoke the old number, either. It’s the same issue as the IRS’s old program: by the time you can fix the issue, it’s already too late. The SSA won’t even explain the situation to the credit bureaus - you have to do that, creating more headaches and financial stress. All of this is eventually resolvable, but only with a plainly excessive amount of work that could be avoided by just using a better system.

My “better system” would take the IP PIN model and apply it to the SSN itself. At the start of every year, you’d get a new SSN in the mail. The old one would be revoked 3 months later. That creates a small headache whereby you’ll have to give your new SSN to the people and companies you’re still affiliated with. It avoids the much larger headache that happens when the people you’re no longer affiliated with are hacked and lose your SSN.

And, of course, you can at any point in time for any reason at all get a new SSN and have the old one revoked immediately or 3 months from the time of renewal. If you lose the old number, no biggie, get a new one. If you read in the news that your bank just got hacked and had all their SSNs stolen, no biggie, get a new one. The SSA might even proactively revoke SSNs that companies report as stolen and renew them without you ever asking.

The only other flaw in this process is the question of authenticating you each time you renew. If your SSN is stolen, and your hacker is faster than you, they could renew your SSN in your name and really screw you over. This is possible because all the SSA has to identify you is your name and date of birth. If it also had biometric information, this attack wouldn’t be possible. The SSA could keep a photo of you and require you to supply a new one every couple of years, or whenever you undergo major facial changes. Then, when you elect to renew in the middle of the year, or whenever you change your mailing address, you’d be able to “freeze” the number instantly, but you’d have to show up in person to complete the process of replacing it. It’s a lot harder to steal identities when in-person visits are necessary, because most identity theft is conducted exclusively online or by mail.

Alternatively, states and companies that need to verify your identity can just implement their own verification procedures. It would save the taxpayer a decent amount of money if the states and companies independently verified identities, and a federal system could leverage these decentralized verification procedures by “hooking into” them (so now, instead of needing just your name, DOB, and SSN, a hacker would also need the state where you had your SSN issued). We already kind of have this state procedure in the form of state ID cards and driver’s license numbers (although these aren’t always free, and those numbers are also prone to theft, but at least the physical card has your face on it).

In the meantime, enroll in the IP PIN program to save yourself more interaction with the IRS than is strictly necessary. Identity theft can happen to anyone, and if you’re expecting a refund next year, you’d better hope that you can file faster than the other guy.

Distribution Agnosticism: Going Beyond a Separate /home Partition

For the first 8 or so years that I used Linux, I never found it necessary to create separate partitions for /home, /boot, /tmp, or whatever other “important” folder you can find in the filesystem’s root directory. This was for two reasons:

  1. Partitioning on drives accessed through proprietary drivers (which I had done for 7 of those 8 years) was a scarring experience that I did not wish to repeat unless absolutely necessary
  2. 99% of my important data is on the cloud anyway

Now I understand that putting /home on its own partition is not only a fantastic idea but also the only true way to experience Linux. And now I’m going to help you understand the same.

How I used to use /home

I was new to the scene, so I made the sensible decision to leave well enough alone. I had all my eggs (home/boot/tmp) in one basket. I figured that those supreme beings, the developers of the Ubuntu operating system, wouldn’t have allowed me to leave everything as one partition if it wasn’t the best option.

And for some cases it sure is the best option. Ephemeral servers, Raspberry Pis (or is it Pies?), and anything else that doesn’t need persistent data also doesn’t need a separate /home folder.

That was not my case, and it probably isn’t the case for anybody who uses Linux as their primary desktop operating system. Your home folder contains every single user-specific configuration file: mail accounts, browser cookies, keychain data - all the data you generated when you logged into your various online accounts. And it is very irritating to set those up again every time your home folder is deleted (whether by accident or on purpose).

How I use /home now

I have a LUKS encrypted partition, mounted on boot through /etc/fstab and /etc/crypttab, that contains the contents of my home folder.
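
For reference, here is a minimal sketch of what those two entries might look like. The mapper name, UUID, and filesystem type are placeholders - substitute your own partition’s values (find the UUID with blkid):

# /etc/crypttab: unlock the LUKS partition at boot as /dev/mapper/home
# (the UUID below is a placeholder)
home    UUID=3f39d7ab-aaaa-bbbb-cccc-000000000000    none    luks

# /etc/fstab: mount the unlocked mapper device at /home
/dev/mapper/home    /home    ext4    defaults    0    2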

Here is every single benefit I’ve discovered by using this system:

  1. I don’t have to rely on eCryptfs, which is useful for encrypting non-separated /home directories at rest, but is also terrible at data recovery
  2. I can multiboot different distributions on different drives - using Arch, Ubuntu, or whatever else I choose - while retaining access to my online accounts and not creating an enormous number of app-specific passwords (I call this “distribution agnosticism”)
  3. If I manage to destroy my root filesystem, my home partition (should be) completely unaffected

The only downside is that migrating to this system can take a long time, depending on how much data is in your /home directory. Thankfully, the accidental destruction of my root filesystem is what motivated me to pursue this option, so I had no data to migrate; but if you do, prepare for a long and tedious copying operation.

My model of Distribution Agnosticism

My newfound ability to multiboot distributions got me thinking. If I can simply swap out my current distribution for another one, barely sacrificing anything in the process, does that give me the opportunity to take Linux beyond the concept of the distribution?

Prior to this discovery of mine I perceived the distribution as paramount. It held the ultimate power over how I used Linux, how I perceived Linux, and how I interacted with software. Now that I’ve expanded beyond the concept of the distribution, I am made to challenge my perception of the Linux ecosystem, and how I should continue to use it.

My model of distribution agnosticism involves having multiple distribution installations, each for a specific purpose. I can use elementary OS for schoolwork, Arch Linux for development, Linux From Scratch for system experimentation, and Qubes OS for accessing sensitive data. This model addresses a great concern of mine whenever I use Linux - the ease of re-tooling a system.

If I switch desktops from GNOME to i3 in Arch Linux, I can go about uninstalling the gnome group and installing the i3 group. But wait: uninstalling the gnome group also uninstalls gnome-keyring, eog, and plenty of other software that I use regardless of the desktop environment at hand. It turns out these utilities are either (1) members of the gnome package group or (2) installed as direct dependencies of packages within the gnome group, without having been explicitly installed. And they aren’t included in the i3 group (which is intentionally minimalist). So now I am confronted with the enormous task of manually removing all the GNOME-related packages that I don’t want while keeping those that I do. Simply learning the difference between the packages in these groups is an immense task, and deciding what I do and don’t want has taken hours of time that I didn’t ask to spend.
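
For the curious, here is an untested sketch of that triage on Arch. The file names are arbitrary; the only tools involved are pacman, awk, and coreutils:

#!/usr/bin/env bash
# pacman -Sg prints "group package" pairs; keep only the package names
pacman -Sg gnome | awk '{print $2}' | sort > gnome-pkgs.txt
pacman -Sg i3    | awk '{print $2}' | sort > i3-pkgs.txt

# Packages in the gnome group but not the i3 group: review these by hand
comm -23 gnome-pkgs.txt i3-pkgs.txt

# After removing whatever you decide against, list orphaned dependencies
pacman -Qdtq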

Yes, I could suck it up and just install i3 without removing gnome, but that is inefficient. Why keep unused software? It might even expose me to security vulnerabilities (after all, the more software I have, the more “surface area” a potential attacker has to find a way in).

Maybe the solution isn’t to uninstall gnome and install i3 - maybe the solution isn’t to install i3 at all. Maybe the solution is to create a completely separate instance of the distribution and install i3 on that, and address issues with the setup as they present themselves (such as the initial absence of gnome-keyring). Making this process totally independent of the existing software ensures that I have a fallback in case i3 is utterly incompatible with my way of life.

The development of several highly specialized and independent instances of Linux distributions creates redundancy - breaking changes made to one system have no impact on the others. The extreme separation of these systems can lead to a deeply secure setup, so long as proper sandboxing measures are taken (for example, why should a distro geared to schoolwork have access to your personal documents?). It also means you don’t have to worry about the aforementioned GNOME effect. The problems that arise from removing software that has deeply enmeshed itself with your system simply no longer exist.

The feasibility of my model of Distribution Agnosticism

I can see this model being implemented with relative ease.

Let’s assume you don’t have a large external storage device, and that your hard drive is a standard 256 GB in capacity. Major distributions - think Ubuntu, Fedora, and the like - don’t take up more than 15 GB with the base system and basic packages (e.g. Chromium, LibreOffice, and the other everyday utilities) installed. So, assuming that you want three separate distros, and each will consume at most 20 GB, you need only devote 60 GB to these three systems. Your home folder can take up the remaining space (the boot partition is negligible). That sounds pretty simple to me.
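
To make that concrete, here is a hypothetical layout using parted. The device name and sizes are assumptions - adjust them to your hardware before running anything:

#!/usr/bin/env bash
# Hypothetical 256 GB layout: one ESP, three 20 GB roots, /home with the rest
parted /dev/sda -- mklabel gpt
parted /dev/sda -- mkpart ESP fat32 1MiB 513MiB
parted /dev/sda -- set 1 esp on
parted /dev/sda -- mkpart distro1 ext4 513MiB 20GiB
parted /dev/sda -- mkpart distro2 ext4 20GiB 40GiB
parted /dev/sda -- mkpart distro3 ext4 40GiB 60GiB
parted /dev/sda -- mkpart home ext4 60GiB 100%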

The same is possible with external storage. A 64 GB SD card will do the trick for the scenario I described above. There will be a (usually minor) performance hit, since the software you use will be loaded from the external storage, but the benefit is that if your external storage fails, your home folder is completely fine (assuming you keep it on the internal drive, which I would recommend).

Multiboot poses an initial challenge but is easily overcome. Assuming you use vanilla systemd-boot and bootctl, you can easily create multiple menu options based on the UUIDs of the partitions involved. You could probably write a script that does it for you:

#!/usr/bin/env bash

if [[ $(id -u) != 0 ]]; then
    echo "Please run as super/root user"
    exit 1
fi

blkid | while read -r line; do
    # Word-split each blkid line into "KEY=value" tokens
    split=( $line )
    name=""
    uuid=""
    for val in "${split[@]}"; do
        # Match the UUID= prefix exactly; a substring match would also catch PARTUUID=
        if [[ "$val" == UUID=* ]]; then
            uuid=${val#UUID=}
            uuid=${uuid//\"/} # strip the quotes blkid adds around values
        fi

        if [[ "$val" == PARTLABEL=* ]]; then
            name=${val#PARTLABEL=}
            name=${name//\"/}
        fi
    done

    # Skip partitions without a label; the label doubles as the entry's file name
    [[ -n "$name" && -n "$uuid" ]] || continue

    printf 'title\t%s\nlinux\t/vmlinuz-linux\ninitrd\t/initramfs-linux.img\noptions\troot=UUID=%s rw\n' \
        "$name" "$uuid" > "$name".conf
done

Please do not take the above script to be sacred; it is entirely untested and simply exists to prove that a script to automate this process could exist. (Note that it requires all partitions to be labeled and assumes all of them will make use of the same vmlinuz-linux and initramfs-linux.img files).

Should you do it?

Yeah, if you have too much time on your hands (like me).

It should be seriously considered by the individual who plans to use Linux in the long term for “important” work. It’ll probably save you from most of the pain and sysadmin headaches associated with dealing with Linux. It’ll certainly save you from the mistakes I made when I was starting out.

Framework Overload: Do Things the Simple Way, Not the Right Way

Lying just beyond the introductory self-guided resources to a language is a vast sea of various solutions built on top of it.

The JavaScript ecosystem is the most notorious for this framework overload - so much so that JS developers often joke that every 5 minutes a new framework is born - but it occurs in every major language. There are tools for dependency management (e.g. Haskell’s stack vs. cabal vs. Nix issue), alternative syntaxes (e.g. JavaScript’s vanilla vs. TypeScript vs. CoffeeScript vs. PureScript vs. Dart issue), and boilerplate-reducing utilities (e.g. Angular vs. Vue vs. React vs…).

These frameworks more often than not are solutions in search of a problem. The oversaturation of JavaScript as a programming language makes it particularly susceptible to this phenomenon.

Which framework do I use?

As a rule of thumb, if five minutes’ research cannot tell you which one is right for you, then the answer is none at all.

If five minutes can’t net you an obvious answer, it’s likely because the project you seek to make a reality is unique. That’s not a bad thing; it just means that any framework you choose will force you to modify your project in some way to comply with the protocols set forth by that framework.

“Force” is a strange word to use here because many of these frameworks espouse interoperability with existing codebases. Those same frameworks also tend to have an established set of “best practices” that, when followed, guarantee that the project will compile/build successfully. Without following those best practices, of course, the project offers no guarantees as to the success of your build. So while many of these frameworks will not force you to comply with their standards, they will encourage one set of practices while discouraging other kinds of practices.

This is not necessarily a bad thing so long as your needs line up with the framework’s objectives. If you can’t find a framework that does this within 5 minutes, odds are that it doesn’t exist.

Large Projects

Projects that take more than a couple of weeks of development, or that will need to be maintained for a long time, often benefit more from frameworks than short-term projects do. The deeper initial investment of learning how the framework operates pays off more if your project is continuously developed over a longer period of time.

But how do we reconcile the 5-minute rule with this obvious larger-project advantage? We do so by migrating mid-development.

As your project takes digital form and exits the realm of your imagination it will be refined. This refinement process, more often than not, includes bringing your code in line with industry best practices. As your code gets closer and closer to the industry norm, it will also get closer and closer to the best practices and standards laid forth by the frameworks.

In other words, it is quite likely that the more you develop your project, the more easily it will fit within the limitations of a framework. It won’t necessarily become less unique, but the codebase will take on a form that more closely resembles the “average” codebase envisioned by the developers of these frameworks.

Shifting Strategies Mid-Development

Depending on how loosely you define “mid-development,” you can commence the process of shifting your strategy as your program enters the middle stage of development (where the main features have been created and you are about to start smoothing out the rough edges) or as your program leaves the middle stage and enters the final stage prior to the production release.

Starting the shift too early can make development more difficult down the line, since you’ll be operating within the limitations of the framework (which you may come to regret, if a feature you wish to develop is egregiously outside the scope of that framework). Do it too late, and the task of migrating the codebase becomes monumental. How do you find the right moment to shift over?

The advice I have in this respect is to start later rather than sooner. It is very annoying to find and replace references, rename file extensions, and do whatever else a framework may demand; but it is far more devastating to be forced to shift to a different framework, or to abandon frameworks entirely, because you picked one too early, before you knew the full demands of your project. Of course, in a perfect world, every developer knows the entire demands of their project prior to commencing work on it. But we do not live in a perfect world, and there are no perfect developers.

Hack together your own framework

Frameworks do not have to be an “all-or-nothing” ordeal. Your limits in terms of the frameworks that you can use are entirely determined by your creativity.

An old project of mine was initially developed with Webpack and JS, but in the middle of development I became hooked on TypeScript. I proceeded to convert every single JS file to a strictly-typed TS one, and developed a Makefile that compiled the TS into JS before handing the JS to Webpack for minification and production deployment. (And yes, I know that Webpack has TS support, but at the time that support was strict and required the TS to obey specific parameters that I had no interest in complying with.)
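
Expressed as a shell sketch rather than my original Makefile (the file and directory names here are assumptions, not the real project’s):

#!/usr/bin/env bash
# Stage 1: compile strict TypeScript into plain JS
# Stage 2: hand the compiled JS to Webpack for production minification
set -euo pipefail

tsc --strict --outDir build src/*.ts
webpack --mode production --entry ./build/index.js --output-path dist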

GNU’s make utility is probably your best bet for merging many frameworks into your own Frankenstein’s framework. Any scripting utility works; make is simply the most intuitive to me, a Linux user, since just about every Linux machine has make installed, but not always python or perl.

The moral of the story is that you are the developer. Not the person who designed the framework, not the person who wrote about industry best standards on the Internet, you. Pave your own way so you can make the project you want instead of the project the framework wants. But do so intelligently, to avoid making your project so cumbersome that to enter the codebase is to enter a labyrinth of your own creation.

The Feynman Technique: Learning How to Learn

As our understanding of various subjects increases, so too does their complexity and obscurity to the uninitiated. Physics and mathematics, biology and ecology, computer science and programming - these are all subjects which an average person will shy from out of the sheer complexity of understanding them.

The consequences of this phenomenon are dire. The outsider looking into these subjects may be completely put off from delving into them. The child who wishes to grow up and be an astronaut may give up that dream not because they find better interests elsewhere, but because the complexity of astrophysics scares them away. Increasingly in today’s society we are feeling the effects of overspecialization and the death of polymathism - the over-specific knowledge possessed by today’s experts prevents them from joining multiple fields of study and advancing both in ways neither could achieve alone.

But what about these subjects makes them so convoluted and unappealing to the novice?

First Impressions

A person’s perception of a field of study is shaped almost entirely by how they are introduced to it.

More often than not a person is introduced to a “complex” field - think mathematics or physics - as something uniquely difficult to pursue. Sometimes that happens when a person is just a child, making simple inquiries about the world around them.

Why are people introduced to these subjects in ways that reduce their will to learn them? I think it’s because of our negativity bias - we tend to remember the most negative aspects of a thing more than the positive ones. We remember the very difficult math - the sort taught to us by bad teachers - more than the easy math we were taught by good teachers. The crux of the issue is that the “complex” topics are those which are routinely taught poorly.

Why are these subjects taught poorly? That is a more complex issue, but the most relevant answer is that teachers of these subjects do not know how to teach them properly.

Classes such as history, English, and the other liberal arts are vastly different from the sciences in that none of them require a shift in a person’s logical thought processes. For example, the study of English grammar is not understood by the mind in the same way as algebra.

And yet in the majority of cases it seems that these two disparate categories - the liberal arts and the sciences - are taught in nearly identical ways. We cannot teach two different subjects in the same way and anticipate the same learning response.

How do we teach the sciences?

We should teach the sciences scientifically.

  1. Do not operate on assumptions.
  2. Since the first rule means that you can’t assume your students have prior knowledge, explain everything in sufficient and clear detail.
  3. Teach by the discovery principle - allow your students to form the connections between a theoretical problem and a solution on their own (with guidance).

Let me elaborate on those three rules.

The first rule is simple enough: assume nothing. For example, for your first lecture on calculus, do not assume that your students know what a limit is. Do not assume that they know what a tangent or secant line is, don’t assume they know what a slope is, don’t assume they know what a line is, and don’t assume they know what a function is.

At first glance this first rule seems cumbersome, but it is actually the most important of the three. When you stop assuming things about your students’ knowledge, you gain the incredible ability to correct deep-seated misconceptions they may hold about even the most basic principles of a subject. Your students may not have known what a tangent line or secant line was until you explained it. More importantly, students may have only had a practical understanding of the concepts that you explain rather than a conceptual understanding.

Bad teachers often do not properly explain to students why something exists in a subject. Those who fare well under bad teachers do so by developing a practical understanding of the concept - often through the memorization of formulas, acronyms, mnemonic devices, and other tools designed to be applied to specific situations. These students will score highly on the average math placement test not because they actually understand the material, but because they understand how to use it in very specific situations. It is indeed possible to use the material of a subject without understanding it completely, because of the rote nature of test questions (even complex word problems tend to have a discernible pattern which is then associated in the student’s mind with a very specific approach). Scores on these tests will fall drastically if students are ever asked to synthesize knowledge based on a conceptual understanding of the material.

An example of this is a student who knows (via the power rule) how to calculate the derivative of a function, but does not actually understand what the derivative means or the basis for the shortcut they used.
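
In symbols, that distinction looks like this (a standard statement of both, nothing specific to any one course):

\frac{d}{dx} x^n = n x^{n-1} \quad \text{(the memorized shortcut)}

f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \quad \text{(the definition it abbreviates)}

A student who can apply the first equation to x^3 but cannot explain the second has a practical understanding, not a conceptual one.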

The second rule is the embodiment of (and is subject to) common sense. If you assume nothing about a student’s prior knowledge, you obviously need to explain everything that you assumed they knew. The question becomes “How much?”

Common sense is the only suitable tool here. When explaining the concept of the derivative - the slope of a tangent line - you must naturally explain the concept of the mathematical limit. It is not, however, advisable to explain things such as one-sided limits, or limits going to infinity, or limits of sequences. These things are not necessary to the problem at hand, which is simply “how do you calculate the slope of a tangent line?” They may be necessary in a separate lesson focused entirely on limits and their applications, but applying common sense to this particular lecture should lead you to only explain the things that are directly relevant to the issue at hand.

It is better for me to show you an example of the third rule than to exhaustively explain it. I wrote an article on taking the slope of a tangent line that followed this “by discovery” approach, which was inspired by Douglas Downing’s “Calculus: The Easy Way” (formerly “Calculus by Discovery,” a much more fitting title in my opinion).

Educating students by having them discover the concepts for themselves is a deeply engaging and effective means to teach them. People who actively employ this approach are forced to confront gaps in their knowledge which would otherwise be obscured by buzzwords (since the rules force them to define and explain these buzzwords) and as a result build a comprehensive understanding of the subject matter.

This is the essence of the Feynman Technique. Express information concisely and simply and you will understand it. Hide behind cumbersome language and gaps in understanding and you will not understand it.

Applying the Feynman Technique to Studying

The principles that apply to educators apply similarly to students. If you, the student, cannot follow the 3 rules of scientific education I’ve laid out, then you don’t understand what you’ve learned (and perhaps you haven’t learned it at all).

A thought experiment: can you explain the concept to a child? Can you strip away the scientific argot and get to the meat of the matter? If you can’t then you don’t understand it!

The first step towards fixing this is to figure out what you don’t know. Examine your own understanding of the topic. Make a list of the things that you do know and another list of the things that you don’t know. Ask yourself to define each and every thing directly related to the topic. Figure out what (lack of) information you’re hiding from yourself, and fix it rather than feeling insecure about it.

How do you fix it? You ask questions. Ask Google, your classmates, your teacher. Asking questions becomes easier the more you do it - so make a point of doing it more. Don’t wait to patch up holes in your knowledge - fix the issue right away.

After you’ve fixed the gaps in your knowledge, the rest is simply working towards expressing your knowledge in a succinct way, using the other two rules.

Concluding Remarks

Do your best to throw away everything you think you know about the difficulty of broaching new subjects. You can broach any subject of any kind - so long as you’re not afraid to confront your own lack of understanding, and are willing to work towards creating it.

Flutter Web, or How to Get a Back-End Dev to do Front-End Work

I was introduced to Flutter in a period of my life where I was experiencing mobile development ennui. The mechanisms for UI development shown to me in Xcode’s Storyboards were clunky and dissatisfying, and the brand-new SwiftUI framework was too new to account for every possible use case, forcing me to create iOS UIs with a Frankensteinian combination of UI-builder-generated code and explicitly programmed scenes.

Flutter showed me what the future of mobile development could look like. The “widget” model, resembling a purely-programmatic implementation of Xcode’s Storyboards, was an absolute delight from the start. The excellent integration with Visual Studio Code (especially the autocompletion) and more documentation than I’ve written in my entire life sealed the deal. Every time I’ve developed a mobile app since then, I’ve done it in Flutter.

Flutter Web only became relevant to me a few weeks ago, when I was approached by a friend with a request to develop a secure contract exchange system as an alternative to email. I gladly obliged, developing my database skills and reuniting with Golang for the backend.

That didn’t change the fact that I had been asked for a website. Not a command-line utility, not a mobile app, but something accessible by web browser. I turned my attention at first to the various frameworks which have littered the web development expanse for ages - Angular, React, Vue, and the like. And then I remembered my good friend Flutter.

What makes Flutter Web so special?

It’s the same Flutter you use to design mobile app UIs… but for the web.

It works in an almost identical manner. You construct widgets (analogous to “scenes”) which give way to other widgets. They can do things like collect user input, display animations, and all the other things you would expect from your typical mobile application.

I said that it works in an almost identical manner because there is one crucial difference: Flutter Web can access the underlying HTML in much the same way that JavaScript can. You can facilitate file uploads, page refreshes, and HREF navigation directly from within your Flutter codebase.

Is it ready for production use?

Not really.

Flutter Web is only available on the beta channel, which will ideally be a temporary arrangement while it is continuously worked on and refined.

You can enable the functionality yourself with the commands below:

flutter channel beta
flutter upgrade
flutter config --enable-web

There’s no guarantee that existing projects will be compatible with the web functionality, but you can enable it within a project using this guide.

A relatively surefire way to make a Flutter Web project ready for production use is to design it from scratch as a web application. This is completely against the vision of the Flutter framework (write one codebase, deploy it everywhere) but until web support moves to the stable channel you can’t be sure there’s not a bug lurking deep within your codebase.

Is it promising?

Very.

I am a backend developer, which means that I vastly prefer the logic and frameworks powering APIs and servers to the logic and frameworks powering the websites that you actually see with your own eyes.

If I, an ardent hater of all things front-end, can be made to enjoy web development with this framework, then it is very promising.

It is dead simple to get started with, highly intuitive, and (with practice) almost natural. I can start a new Flutter project and go 10 minutes without needing to look at documentation, which is not something I can say for Angular or any of those frameworks (I cannot actually start a project in those without looking at the documentation).
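
For what it’s worth, that getting-started experience fits in three commands (the project name here is an arbitrary example; web support must already be enabled as described above):

flutter create my_app   # scaffold a new project
cd my_app
flutter run -d chrome   # build and serve the web version in Chrome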

If development on Flutter Web keeps progressing as it has recently, and it moves without trouble into the stable channel, I am confident saying that front-end web development will become vastly simpler (at least for the average back-end developer).

IB vs. AP: What I Learned in High School

High school students who wish to challenge themselves with their course material, earn college credit in advance, or both, have two main options: Advanced Placement (AP) and International Baccalaureate Diploma Programme (IBDP).

My high school experience has led me to conclude that, for my purposes, one of these programs was vastly superior to the other. Before I tell you which one, however, I want you to understand how I came to my conclusion.

AP

Advanced Placement is managed by the College Board, a U.S.-based non-profit organization which also manages the SAT and the Test of English as a Foreign Language (TOEFL).

AP courses generally follow the structure of a traditional U.S. college/university course. They are taught out of textbooks designed specifically for AP instruction, and the objective of each course is to cover all the material that may appear on the end-of-year examination.

That end-of-year exam takes place in early or mid-May and lasts anywhere from 1 to 3 hours. It contains a single multiple-choice section and anywhere from 1 to 3 writing sections. Performance on that exam determines your AP score, which is on a scale of 1 to 5 (mimicking the U.S. letter grading scale, with 5 representing an “A” and 1 representing an “F”). A 3 or above is considered a passing grade, and most colleges/universities in the U.S. will award credit for a 3 or above.

IBDP

The International Baccalaureate Diploma Programme is managed by the International Baccalaureate, a Geneva-based non-profit foundation which also manages the Career-related Programme and the Middle Years Programme.

IB courses follow a less traditional educational approach when compared to AP. They are heavily project-based, with in-class portfolios and presentations making up the vast majority of a student’s final grade in the course. Depending on the particular course, end-of-year exams comprise some percentage of the final grade, while the in-year projects comprise the rest.

Notably, IB courses (with few exceptions) last for 2 academic years, with projects split between the two years and the final exams taking place at the end of the second year.

The received score is on a scale of 1 to 7, with most colleges/universities accepting a 5 or above.

Why is AP better for the American student?

While the IBDP syllabi and project-based educational standards are vastly more engaging (and, perhaps, more informative) than the rote memorization encouraged by AP, the American student is vastly better off if they shun the IB and take AP courses in their place.

From what I’ve been told about what happens behind the scenes, it’s not a difficult decision to make. The IBDP contains fundamental flaws in its implementation, stemming from its approach to grading coursework.

For example, rather than assessing the submitted coursework of every student, the IB marks only a small sample of a class’s submitted coursework. Then, based on how much the teacher’s marks deviated from the IB’s marks, the IB adjusts the marks of every unassessed student by that difference (if the teacher scored the sampled work 4 points higher than the IB did, every unassessed student loses 4 points). Occasionally this results in a class’s grades increasing, in the event that the teacher was a harsher grader than the IB; but in the majority of cases every student’s grade is reduced.

The practice of treating each class as a single entity fails to address each student’s individual performance, and penalizes students who may have achieved far better results than the rest of their class. This is especially true for classes taught by lenient instructors - if most students in a class receive good scores on their projects, and the IB samples the worst projects of the class, then students who may actually have deserved high marks will be severely penalized based on the performance of the worst students in the class.

This tactic is as efficient as it is misrepresentative - applying the performance of a small sample of students to an entire class saves hours of grading work for each class. It may very well be necessary for the IB to take this shortcut - grading several different kinds of projects (and the final exam) against different mark schemes is a far more laborious task than grading a single AP exam (half of which is graded automatically, since it is multiple-choice).

Regardless of its necessity, the employment of this tactic in determining a student’s final grade means that a student has little control over their academic performance. In comparison, the AP (which relies on a single exam which is individually graded by two separate persons) affords far greater control over a student’s final results.

An additional advantage of the AP is that its final exam can be administered independently of a course. If a student is already proficient in Chinese, Biology, or any number of the subjects administered by the AP, they can purchase the exam for that subject through a school which participates in the AP. They can then sit for that exam and receive a score just as they would if they had taken the course, only they do not waste time re-learning material that they are already fluent in. This is impossible with the IBDP, which requires coursework to be submitted through a teacher as part of a class. This independent exam administration also means that AP exams can be taken more than once.

Another aspect of the AP to note is that it lasts a reasonable length of time: 1 academic year. Receiving a poor score in AP is less of a waste than receiving a poor score in IB, since less time was wasted in receiving the undesired score.

An international student, or a student seeking admission to an international university, may find that this advice is not suited to their needs. International schools have specific protocols in place for students who participate in the IBDP - specifically those who are expected to receive an IB Diploma (which is a program consisting of several IBDP courses + additional project work). But students seeking admission to American schools will find few (if any) that place significant weight on a student’s receipt of the IB Diploma. Those schools are often more concerned with the rigor of the courses in a student’s transcript.

Advice for NBPS Students

North Broward has been steadily replacing its roster of AP courses with their IBDP equivalents.

This is understandable - the school is increasingly gearing itself toward the international market through the development of its boarding school aspects (e.g. the construction of new dormitories). To achieve an internationalized image, the school must also offer internationally-recognized programs, and offering these alongside American equivalents would reduce enrollment for both and strain resources.

That doesn’t mean you’re powerless to place your education within your own hands. Historically, the school has permitted IB students to register for their equivalent AP exams each year (although they must pay for the exam themselves). This is quite likely your best option as a student seeking American school admission.

In the event that no course/exam is offered at North Broward for an AP program that you’d like to participate in, you can purchase a self-study textbook and contact nearby schools (e.g. Monarch) to purchase AP exam seating through them. This will mean, however, that you’ll have to go to another school to take those exams.

So long as you develop a comprehensive academic plan regarding which AP and/or IB courses and exams you’ll take, you should be able to start your college/university experience with a definitive head-start. Avoiding entry-level classes such as English 101 can make your transition from high school to college a much smoother one.

Who am I?

My name is Milo Jonathan Gilad, and I am an Israeli-American Computer Science and Cybersecurity student at the University of Central Florida.

I previously attended North Broward Preparatory School in Coconut Creek, Florida for high school, and the University of Maryland (College Park) for the Summer and Fall 2020 semesters of my undergraduate degree.

I plan to write about projects I’m working on (or have worked on in the past), and miscellaneous pieces of advice that my past self would have appreciated.

If you’re interested in learning more about me, please see my profile website at www.milogilad.com.