So now they can take my money if I keep it in cash and travel with it, or if I deposit it at a bank in under-$10k increments; and if it's more than that, that's also suspicious. I guess just having money is suspicious these days... we're all supposed to spend it on fancy phones.
Overall, it just feels like the IRS got jealous of the police departments' civil forfeitures.
The article hints that civil forfeiture may be the mechanism used here as well. If I had to pick a single policy to change in the USA, it would be civil forfeiture. It just seems so glaringly unfair; surely both sides of the aisle could agree on this one.
The problem is that police departments across the country use civil forfeiture to fund day-to-day activities, and it's a policy that gets a candidate the support and endorsement of law enforcement at the municipal and state level. https://www.nytimes.com/2014/11/10/us/police-use-department-...
The situation was depicted in the fictional TV show The Shield fifteen years ago, and it turns out the truth is no less crazy than the stuff they made up in those fictional stories.
Those things are not illegal per se, but they can be signals of illegal activity. So they're more of a heuristic -- but not following the reporting procedure is itself made illegal, because doing so can be used to evade detection.
The one thing I don't understand is why figures like these rarely keep up with inflation. I think they should. $10,000 isn't what it used to be.
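For rough scale - assuming the $10,000 figure dates back to the 1970 Bank Secrecy Act, and using approximate annual-average CPI values:

    # Back-of-the-envelope only; CPI index values are approximate.
    cpi_1970, cpi_2017 = 38.8, 245.1
    print(10_000 * cpi_2017 / cpi_1970)  # ~63,000: 1970's $10k in 2017 dollars

In real terms the threshold has quietly tightened more than sixfold.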
> The IG took a random sample of 278 IRS forfeiture actions in cases where structuring was the primary basis for seizure. The report found that in 91 percent of those cases, the individuals and businesses had obtained their money legally.
That's not a very good signal. Even if the numbers were reversed and 91% had obtained their money illegally, I'd say that a 9% false seizure rate would still be way too high.
It isn't okay to use a rough tool to steal. The IRS is stealing from people. They need proof that people obtained this money illegally, or they are outright criminals who are stealing.
This is kind of an ironic claim from a community that believes so strongly in "black boxes" like machine learning. Golly, if their heuristics point in a particular direction, isn't that a ~signal~ with predictive power and market value?
Take heed of this the next time you extol the virtues of blind inference - whether human or machine. Maybe next time it'll be _your_ soap dispenser that doesn't recognize black people. Or a model that redlines black people and keeps them from getting a credit card, a mortgage, etc.
Because the reality is - there are all kinds of signals like these that are real. People in certain areas are more likely to be poor, to default on their loans, etc - that's a real signal, just like these high-value financial transactions. People doing big transactions in cash are way more likely to be involved in crime - just like people in certain areas are way more likely to default or whatever.
But relative risk says nothing about the false positive rate. People do cash transactions all the time, legitimately. Poor people manage to pay their mortgages without defaulting all the time.
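To make that concrete, here's a minimal back-of-the-envelope sketch in Python. Every number is invented for illustration; they just happen to land near the IG report's ratio:

    # Hypothetical numbers, purely for illustration.
    base_rate = 0.01           # say 1% of cash-heavy businesses launder money
    p_flag_given_crime = 0.90  # the heuristic catches 90% of launderers...
    p_flag_given_legit = 0.09  # ...but also flags 9% of legitimate businesses

    p_flag = (base_rate * p_flag_given_crime
              + (1 - base_rate) * p_flag_given_legit)

    # Bayes' rule: of everyone flagged, what fraction is actually guilty?
    p_crime_given_flag = base_rate * p_flag_given_crime / p_flag
    print(f"{p_crime_given_flag:.0%} of flagged accounts are guilty")  # ~9%

That's a signal with a 10x relative risk (90% vs 9%) that is still wrong about nine times out of ten, simply because the underlying behavior is rare.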
Also - the reason people are trapped in that area in the first place is social and economic. The decision to value a particular signal - or to trust a model that blindly ignores confounding factors - ultimately lies with you, the human creating the model. Never hide behind it.
The machine is just a machine, it just does math for you. It only calculates the model you tell it to. If you let your model be racist, that's on you. You designed it that way. You designed it to be racist.
The creation of the "black box" model is one of the scariest things we've done in the last couple of decades. Because now nobody can tell you why it happened; it was just a bunch of neurons that attributed various weights to factors in your profile. If you're lucky, they'll even be able to tell you what some of the weighting factors might be (but usually not). And now it's nobody's fault, it's just ~the model~. It's a blatant abdication of any responsibility for the faults or consequences of your work.
It's problematic in any situation where a programmer sets up model X, lets it run on the input, and accepts the result without social consideration, handing it over to a register jockey who must blindly accept the output without any ability to countermand the model.
It used to be that if you tried to get a loan, there was a loan officer in your town who had the final say. Someone who knew the locals and could override the "model" based on human knowledge: the Kennedys are good for it, their business is still solid enough; the Johnsons are really unreliable and I wouldn't do it.
I'm much more concerned with the "false positives" here than the "false negatives". The loan officer can give a few bad loans based on his gut feeling... and then he's going to lose his job. There's a direct feedback loop on them. Someone who doesn't get the loan to expand their business is going to suffer much more immediately and much more deeply.
That's what I'm fundamentally against - the abolition of the "loan officer" in this situation, a human who can countermand the models when they're obviously wrong. At the end of the day these are just classifier models, and there's no guarantee that any given output is valid for a given input - someone has to maintain the feedback loop and keep retraining the model.
And not only that but these aren't meaningless "training runs", each one can potentially screw up someone's life. So again, big consequences for error here.
And indeed, the socially just answer may not even be the mathematically correct one. Is there actually a check that your trained model doesn't discriminate against black people? If you weight that feature to zero, are you sure it isn't going to start picking up the addresses where they live instead? Or their names?
The problem with black-box models is that they are designed to identify arbitrary or hidden features. Even if you forbid one feature, they will often just find a proxy for it. That's what the models are supposed to do, actually. That's super problematic when there's nobody around to tell the model "no", and it can ruin someone's life.
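Here's a toy demonstration of that proxy effect on synthetic data, with made-up feature names - a sketch, not anyone's production model:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    # Synthetic protected-group membership -- deliberately NOT a model input.
    group = rng.integers(0, 2, n)
    # Zip code agrees with group 90% of the time: the classic redlining proxy.
    zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)
    income = rng.normal(55, 10, n)  # independent of group in this toy setup
    # Simulated historical decisions were biased: group 1 was never approved.
    approved = (income > 50) & (group == 0)

    X = np.column_stack([zip_code, income])  # note: 'group' is excluded
    model = LogisticRegression(max_iter=1000).fit(X, approved)
    print(model.coef_)  # the zip-code weight ends up doing the discriminating

Forbid the feature and the model just routes the same signal through whatever correlates with it.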
I'm picking on black people as an example here because redlining is a blatantly obvious case of a rational individual decision with massive social consequences. But you can substitute in "high risk financial transactions" like handling lots of cash if you like. Those are pretty obviously prone to false positives just like redlining.
Frankly, there are a lot of things about you that have recently become "public knowledge" that you probably don't want a government agent analyzing with a blunt instrument. For example, the USPS saves "mail covers", i.e. the addressing information for every letter and parcel in the US. Or, based on commercial information that can be gathered without a warrant (Reddit/HN datasets are on BigQuery, let alone actual ISP- or forum-level data, which can be subpoenaed - similarly, courier services like UPS/FedEx can be subpoenaed without a warrant), they could analyze users to see what topics they post about on the internet. Post a lot about drug legalization? Our model says that's suspicious. Don't mind the dogs, they're just sniffing.
This argument really doesn't hold water to me. It seems dramatically more likely to me that the human agent overriding the model is going to be prejudiced than the mathematical model itself.
Of course it's possible (in fact, almost certain) that a model trained on a large dataset is going to pick up on some problematic features. But is it really likely that these statistical inferences are more biased than a human being would be?
I'm sorry, but in my experience the number of racist human beings outweighs the number of racist computers.
Your examples seem so fraught. "The Johnsons are unreliable," coming from a human, seems as likely to mean that John Johnson and Mr. Overriding Agent's sister had a nasty breakup as it does to mean they're likely to bounce checks. "The Kennedys are good for it" just sounds like code for "the Kennedys are of the racial group the agent prefers."
I agree with you that we can't blindly follow computer models, but I don't think I follow you to your conclusion that the loan officer was a valuable safety net.
But that loan officer brings their own bias to the scenario. They could just as easily say the Johnsons are unreliable because they are black. It wasn't that long ago that saying so was institutionalized, and I suspect it still occurs. An algorithm is colorblind. This isn't to say an algorithm is necessarily good, but humans aren't either. One of the reasons bureaucratic red tape exists is as an effort to overcome individual judgement in favor of consistent and fair judgement. So although human intervention or policy changes need to happen here, given the massive abuses and obvious unfairness, I don't think it damns the mathematical model as a concept.
That's really what I'm arguing overall, mathematical models are great as a second opinion, or even a first opinion, but at some point you do really need a human in the loop to say "nah that doesn't make sense".
The nice thing about this, versus - say - a self-driving car, is that you don't have to make a decision within 0.4 seconds before the car crashes. There is plenty of time to get a human into the loop here, and to show the human why the model thinks what it does. A model that can't explain itself is going to be of limited social value unless the need is absolutely dire, like the car scenario. If the car gets me out of a crash, great, but I would want to know why my loan was denied.
If you can't put it into words or show a plot of the classifier... well... Condorcet voting hasn't taken off either; everyone seems to prefer IRV so far, even if it's "less optimal".
We have plenty of experience with "if you screw up as loan officer too much you lose your job". That's a pretty well-proven model. Especially now that you have an additional guidance signal on when the person is going "off the reservation" so to speak. Be a loose cannon too much and you're going to be the first guy carrying their stuff out the door in a box.
Frankly I'm much more enthused about the reverse here - I want to see the model tell me who's a bad loan officer given the hand they're dealt, not what a bad loan is. We can even incorporate social outcomes into that scoring metric.
It's a lot like teachers. Would you try to black-box yourself to the perfect textbook? Or would you be better off trying to figure out who's the struggling teacher in a shitty school and who's the lazy teacher half-assing it in the rich 'burbs?
Having cash can be suspicious, and there are cases where agents just confiscate the money until you prove it's "clean". There was an article here years ago about them doing this to poker winners - there was even a website where cops traded tips: follow the target around, wait for some cause to pull them over, then take the cash.
If he didn't want this sort of nonsense to be a front and center policy of the DOJ, then he would have appointed someone else.
What about Jeff Sessions? Policy-wise he's also a supporter, although he doesn't have Lynch's proven track record of using it as a tool. Maybe there's a small ray of hope there, but I doubt it.
I work on software banks use to look for fraud or money laundering. Fraud always made sense, since there was a feedback loop with the end customer to confirm whether transactions were legit. We could adjust our algorithms from there.
I don't think the same exists for BSA monitoring. The main incentive for banks is to avoid fines for not reporting suspicious activity, and the Fed doesn't let the bank (or us) know whether the reported activity turned out to be legal. I would love to reduce the number of false positives from our system, but I have no way to build a proper training set. Sounds like the IRS doesn't know what it should really be targeting either.
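A minimal sketch of that asymmetry, with made-up data and field names (nothing here reflects any real bank's system):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Alert:
        score: float               # model's suspicion score
        confirmed: Optional[bool]  # customer feedback (fraud) or None (BSA)

    def precision(alerts):
        """Fraction of alerts that were real -- only computable with labels."""
        labeled = [a for a in alerts if a.confirmed is not None]
        if not labeled:
            return None  # the BSA case: no feedback, no training signal
        return sum(a.confirmed for a in labeled) / len(labeled)

    fraud_alerts = [Alert(0.9, True), Alert(0.7, False), Alert(0.8, True)]
    sar_alerts = [Alert(0.9, None), Alert(0.7, None)]   # no reply, ever

    print(precision(fraud_alerts))  # ~0.67 -> you can tune the threshold
    print(precision(sar_alerts))    # None  -> you're flying blind

Without that second number there's no objective way to trade false positives against missed reports, so the fine-avoidance incentive pushes everyone toward over-reporting.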
There is sort of a feedback loop based on the reports you do file, whether the Fed ends up accepting or complaining about them, and also (sadly) news reports. It works OK.
Monitoring for structuring mostly works fine at larger banks. The problem here is that the IRS doesn't have the same network of analysts and other quality-control measures. At a bank, customers can take their money elsewhere and the bank can be held liable. The IRS has a smaller budget and much weaker incentives (it's very hard to switch governments), and so there's a failure.
TL;DR Do not try this at home: "The IRS’s own internal watchdog found that the IRS had a practice of seizing entire bank accounts based on nothing more than a pattern of under-$10,000 cash deposits."
Many small-business insurance policies only cover less than $10,000 in cash on hand. So plenty of legit businesses get flagged for trying to maintain their coverage and minimize risk.
I worked for a record store in college and we had a Ticketmaster machine (back when they were actual ticket printers). We could only accept cash for tickets, per Ticketmaster policy. So on days that big concerts and sporting events went on sale (like The Grateful Dead, Charlotte Hornets, etc) we would make several large deposits of this cash throughout the day to keep our theft risk under our insurance limit. Under modern structuring/forfeiture laws, we could have lost all that money to the feds and still owed Ticketmaster for the sales we made - because the insurance policy had an exception for government seizure.
I would think the vast majority of cash deposits are under $10k, so I assume what they mean is repeated deposits of close to, but slightly under, $10k.
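i.e. something like this naive flag - the band and the count are my guesses, not anything from the article:

    # Hypothetical structuring heuristic; the parameters are invented.
    THRESHOLD = 10_000
    NEAR_BAND = 0.9 * THRESHOLD  # "close to, but slightly under"

    def looks_structured(deposits, min_hits=3):
        """Flag accounts with repeated deposits just under the CTR threshold."""
        near_misses = [d for d in deposits if NEAR_BAND <= d < THRESHOLD]
        return len(near_misses) >= min_hits

    print(looks_structured([9500, 9800, 9900]))  # True
    print(looks_structured([800, 750, 820]))     # False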
You would think that, but that's not what is happening. A gas station doing ~$800 per day got grabbed. This is the type of stuff that really makes people lose faith in a rational system.
> No person shall, for the purpose of evading the reporting requirements of section 5313 ...
But the report indicates that the IRS made no attempt to determine whether the supposed "structuring" was done for the purpose of evading the reporting requirements, and simply ignored any evidence that showed the pattern of deposits had a legitimate explanation.
You don't need to know about it to be guilty of structuring. I'd posit that most folks who get hit by this crap didn't know about it until their money was gone.
One can hope. Still, it just amazes me how the police forces of this country can keep taking actions known to cause even more public distrust, even when they know they are in the wrong morally if not legally.
Stories like this make it hard to maintain the illusion that government employees are public servants, except in the sense that cowboys are servants to cattle.
91% of 278 randomly-sampled cases were wrongfully seized. Jesus.
> More troubling, the report found that the pattern of seizures — targeting businesses that had obtained their money legally — was deliberate.
Given how many of the cases were wrongful, it doesn't surprise me that it was deliberate. There is no oversight, except for occasional investigations like this one.
This will not result in rolling back this power, though. They'll just put "stronger oversight" in place, to make sure the wolves guarding the hen house are on their best behavior.
The problem isn't their policy; the problem is the law that allows this. These asset grabs are clearly unconstitutional violations of the 4th Amendment - and that doesn't change just because a law permits them. They're taking assets without a warrant or even probable cause. If you want to take the drug money after you've proven the offense and convicted someone, maybe (that's a different debate). But this is simply authorized theft. The justification really isn't there (the excuses are).
What gets me is the audacity of making a suspicious activity that may be an indication of a crime, the crime itself. It's as if they outlawed having a suspiciously high power bill, or looking over your shoulder a lot while you browse items in a store.
It seems to me that data-processing capacity must have increased to the point where they could just reduce the reporting threshold to zero (i.e. every transaction is reported) and so remove the entire need for rules against "structuring".
I wonder if you would just be better off refusing these kinds of actions as a private citizen. I'm pretty sure you could make enough noise about it that it wouldn't be worth their while. Same with those videos of police officers trying to seize cash from people after pulling them over. I would just refuse to give it to them. Are they going to shoot me for money on the side of the road?
> Are they going to shoot me for money on the side of the road?
you're simply not given an opportunity to "refuse". it's usually going to go something like:
1) "mr doe, step out of the car." refuse? you're getting tased and definitely going to jail for refusing a lawful order.
2) "mr doe, i'm going to search your car now." well you're out of your car. you can (and should) say "no", but your car is getting searched regardless. maybe you jump the cop? or shoot at them? now you're definitely going to jail for assaulting the cop, if you even survive the night.
3) they find cash. they bundle it up, pull it out and probably just go stuff it in their car. if they're smart they're not even going to tell you what they're moving until it's locked up in their trunk. if you figure it out and say "hey you can't take that blah blah", they're going to ignore you. so you're back to assaulting them somehow. you're definitely going to jail.
> Same with those videos of police officers trying to seize cash from people after pulling them over. I would just refuse to give it to them. Are they going to shoot me for money on the side of the road?
That depends. How dark is your skin? How nice is your car? How well do you speak?
One of the mitigating factors for police criminality seems to be the apparent race and class of the victim. Most police may not shoot you on the side of the road for your money, but it's a matter of privilege to be able to refuse their demands and not be met with immediate violence.
You can't. Per the article, they would seize entire bank accounts before even talking with you or giving you legal notice. They often take physical assets by force, too (e.g. they show up with guns).
I can see how these transactions could trigger an alert, but why don't they give people a chance to clarify the situation before taking action? This seems extremely unproductive and a big burden for someone who can't easily afford a lawyer.