Note For Anyone Writing About Me

Guide to Writing About Me

I am an Autistic person, not a person with autism. I am also not Aspergers. The diagnosis isn't even in the DSM anymore, and yes, I agree with the consolidation of all autistic spectrum stuff under one umbrella. I have other issues with the DSM.

I don't like Autism Speaks. I'm Disabled, not differently abled, and I am an Autistic activist. Self-advocate is true, but incomplete.

Citing My Posts

MLA: Hillary, Alyssa. "Post Title." Yes, That Too. Day Month Year of post. Web. Day Month Year of retrieval.

APA: Hillary, A. (Year Month Day of post.) Post Title. [Web log post]. Retrieved from

Thursday, October 19, 2017

This semester, I'm taking a class about Augmentative and Alternative Communication (AAC). There are videos. I do something like liveblogging while watching them, just into Open Office. Now the results are here.

So here's the video:

And here's what I wrote while I watched it:

Video defines AAC as “the use of customized methods and devices to supplement a person's ability to communicate”

[In class we described low tech as limited messages, but pen and paper or board and marker]

HI backup systems are important, variety

“Anyone who is unable to speak, or whose speech is difficult to understand.”
[Intermittently this is me, but I actually do sometimes switch over before speech is totally gone, at the point where AAC is more efficient rather than strictly required.]


“Sometimes we find ourselves on the floor or under a desk because that's where somebody wants to be” as a way of noting that there are no behavioral prerequisites for AAC use.
(That's actually concerning that I'm the example here)
(Who didn't get access to communication because of doing the thing I do in grad school?)

Use all the methods. Don't eliminate what's working.
(There are a very few people who can read my body language.)

Least dangerous assumption.

8-12 months in assessment is a while. I get why, I just hope stuff is being tried during that time.

What does the individual want to do? Family and such help and guess if the person can't answer but we want to ask the person. Look at daily life.

Information about prior devices gets lost. So do the prior devices.

Vocabulary for actually having a conversation, rather than only “I want X,” is kind of needed.
Is that what the more than just requesting was about? (Also a video on that topic.)

Thursday, October 5, 2017

Diagnostic arbitrariness and NO, not everyone is "somewhere on the spectrum"

["Somewhere on the spectrum" here is "somewhere on the autism spectrum," or variants on the theme of claiming everyone to be a little bit autistic.]

Autism diagnosis can be pretty arbitrary. There isn't a blood test. There are genes that are associated with an increased probability of being autistic, but that's not the same thing as a gene "for autism" or a genetic test. We don't really ask about the internal experience of being autistic, either. Instead, we basically have a behavioral diagnosis: if you do X, Q, and W, but not A or B, then we're going to conclude that autism is the proper label. C makes us wonder if you might really have some other thing, but we won't rule out autism if you don't meet the rest of the criteria for that other thing. (Or, we shouldn't.)

Since autistic behavior is a subset of normal human behavior, this gets messy. Autistic people might tend to stim in characteristic ways, but everybody stims, and sometimes we're just getting in extra trouble for a way of stimming that is actually pretty common. Think about fiddling with a pen or pencil as an example of us getting in extra trouble for something most people will sometimes do.

That means edge problems. Where, exactly, are we putting the line between two neurotypes? The location of the line changes when we change the diagnostic criteria - that's always a big topic of discussion around DSM changes. Telling people who seem to be near an edge that they are definitely on one side or the other of that edge, based purely on external behavior, will lead to mistakes. Some of these mistakes will be harmful.

Any categorization scheme dealing with people has to deal with the reality that no two people are exactly alike. Not every single person is easy to classify. Our nervous systems didn't read the textbooks while wiring themselves! There are people who fit equally well (or equally poorly) in several categories. The problem there is with the textbooks, and the inevitable incompleteness of categorization systems. MASSIVE harm is done when people treat the problem of not fitting the classification system as being with us instead.

Oh, and let's not pretend that everyone diagnosing autism (or any other neurotype) actually understands the neurotypes they're diagnosing. Plus there are problems from people taking advantage of their positions of power, or otherwise acting in bad faith. Sometimes, things are intentionally done wrong.

Now, all of these issues are real. I've seen people use some combination of these issues to argue that everyone is somewhere on the spectrum, and that's where the problem is. "Some people are hard to classify" doesn't mean "everyone is hard to classify" or "everyone is somewhere on the spectrum for neurotype Y." On a similar note, "The person who diagnosed me incorrectly with X didn't understand X or my actual neurotype of Y" is different from "X doesn't really exist" or "Everyone is really Y." In each of those cases, the first statement is true. The second and third statements are not, and actually look a lot like diagnostic arbitrariness themselves. (They can certainly hurt people in similar ways to diagnostic arbitrariness around the edges of definitions.)

Tuesday, September 26, 2017

"Screen time" and some more patterns

Yet another article about screen time is going around. I swear, those things are everywhere. This time it's Temple Grandin (who gets touted as being an autism expert in general when she's actually an expert in livestock, like cows*) talking about limiting screen time for autistic kids. She's actually more nuanced about it than most - the headline says screen time, and she says it once too, but she does specify what "kind" of screen time she means. Most people don't.

So, here's a bunch of things that get lumped under screen time:

1) I have an ereader. I am reading a book (or a paper related to my graduate studies). On a screen!

2) I'm watching a movie. On a screen!

3) I'm playing Pokemon Go, which involves a lot of walking around, but also it's a cell phone game. On a screen!

4) I'm playing a computer game. On a screen!

5) I currently can't talk, so I'm using FlipWriter on my iPad to communicate with my classmates. On a screen!

6) I'm teaching math. It's an online class, which is great because my ability or inability to speak at the time is irrelevant. My "accommodation" of getting to write or type instead of talking, when needed, is already built in to the system. Still. Where am I doing this? On a screen!

7) I'm using the Internet to talk to a friend who lives across the country. On a screen!

Which of these am I supposed to be limiting? Why are we using one category for all of them, if the answer isn't all of them?

Or, which of these will you admit to having a problem with, versus which ones would you actually like to get rid of? Because I think that's part of the why. If you build a category full of things that you don't like, including some things that it's considered OK to take issue with (video games!), you can get away with talking about the whole category as a problem. Build up the apparent size of the "problem" by including numbers from the parts you need to at least pretend are OK (maybe AAC? maybe online classes?), talk about supposed bad effects from one item in the set (video games?) as if they came from the entire set, and then there's clearly a big problem. Ban or limit the whole category.

I'm thinking back to the pattern I talked about with fidget spinners, or a variation on it: 

1) A disabled person needs something for access reasons.

2) Abled people call the thing distracting, because our existence in public is apparently distracting.

3) The thing is either banned entirely or permitted only for people with the paperwork to prove they need it for disability reasons.

4) Disabled people who need the thing either don't have access to the thing or must out themselves as disabled in order to gain access. If outing oneself is required, the thing is heavily stigmatized.
Instead of being banned because it's distracting to others, it's apparently distracting to us? In any case, the thing is banned or limited "for our own good."

Then what happens?

Whoops, no ereader for you unless you can prove you need it for a disability reason and are willing to out yourself. Spend the money and the space on those paper books! Who cares that they're harder to hold up, or that the electronic version is searchable?

Whoops, no more movies! (You know, storytelling? Acting? It's on a screen, though, so we can't have that.)

Whoops, no more games on a screen! Never mind that some of them involve walking and most of them involve problem solving and that fun matters on its own.

Whoops, no AAC for you unless you have formal documentation of the fact that you need it and are willing to out yourself. Better go back to being silent in class, or maybe not going to school at all! It's distracting to have you here, after all. Or you could try this low-tech system? (Which, to be fair, is most of what I use. Doesn't mean it's OK to make me stick to the low-tech options in the situations where my high-tech, screen dependent options are better.)

Whoops, no more online classes. (Temple actually made this one an explicit exception, so, again, tiny bit of kudos for the nuance, but don't say screen time unless you actually mean screen time because words have meanings.) 

Whoops, no more friends who live far away! Pay attention to the jerk in front of you who thinks screens are the devil.

You only had to admit to taking issue with the video games, but now all this is gone, because you could point to something that many people will take an issue with and generalize it beyond any semblance of accuracy.

*I'm sure she's an expert in what works for her. She basically got pushed to "pass" for neurotypical, which is still what mainstream experts tend to think of as being the "optimal outcome" for autism but is often a recipe for burnout. Now she recommends stuff that makes it sound like she agrees that's the best thing. She also led to the popularization of the idea that "autistic people think in pictures." As an autistic aphantasiac (no mind's eye), I'm well aware that's not consistently the case. So, no, I'm not a fan of Grandin.

Thursday, August 10, 2017

It's kinda funny

So, a few weeks ago I met with two folks from a company that's making a computer game or a video game related to autism and social skills. I agreed to meet with them for a couple reasons:
  • The one I'd met before, I met at a hack-a-thon like event (un-hack-a-thon?) that was autism focused and had many autistic participants, mostly teenagers, and which used Nick Walker's description of autism as a starting point. Starting from a neurodiversity paradigm description of autism is nice, and not something I see much of for technology and autism stuff.
  • The one I'd met also liked the "Autistic Party Giraffe" shirt I was wearing. I find that people's opinions on that shirt are somewhat useful information: folks who comment on liking it are generally able to handle the idea that Autistic identity is a thing without too much worldview conflict.
  • They clearly didn't quite know what "supporting autistic people in finding social methods that work for us" would really mean, but the couple ideas I'd thrown out at Chatter went over well. Things like, if we can get more done by not trying to pass for neurotypical, why the heck is passing for neurotypical considered an optimal outcome? (See Dani's "On Functioning and 'Functioning'," yet again.) 
So, I did the thing. It was exhausting. We met at a coffee place between my campus and the train station on a Friday morning, and we talked for about two hours. They said at the time that what I was saying made sense, and that it changed their perspectives, and now they needed to figure out how to navigate the tangled mess of doing something actually helpful with their game while also getting the needed funding to make the darn game.

One incident that sticks out for me was the demo video of the game. They brought a laptop, and there was a minute or two of gameplay video that I watched. When it first started, there was a big face and eyes right at me. I flinched. Unexpected face in my face! Then there were points where a player was supposed to recognize the emotion that this being was expressing. The emotions were clearly overacted, both in terms of facial expressions and tone of voice. This was supposed to be some sort of "easy" mode, I guess? Whatever. I could tell it was overacted. That didn't mean I could always tell what emotion was being overacted. (Yeah, I got some "wrong.") 

Judging by their reactions to my reactions (how meta theory of mind can we go here?), it seems I served as an object lesson there:
  • Identifying that an emotion is being expressed is not the same as identifying what that emotion is.
  • Managing OK in real-life social situations is apparently not the same as recognizing overacted emotions in artificial settings.
  • Some autistic people will absolutely flinch from unexpected eye contact. Ow.
It's a thing that happened. I was super tired after. 

Monday, July 31, 2017

Distraction or DDOS?

Heads up that this is about the current US government, including the POTUS. Meaning: Everything is a mess.

Every time that several bad things are happening at once, call them R through Z, I see comments like this:
  • Don't worry about X, it's just a distraction (from Y)!
  • Z isn't a real threat, it's just a distraction (from R).
  • They want you focused on S instead of all the other stuff, don't fall for it!
Here's the problem: all of R through Z are legitimately bad. Every single one of them. They might not affect you personally, but they are all bad. Some are foreign policy disasters. Some are complete failures of how our government is "supposed" to work, and not in ways that would help marginalized folks. (A massive change in how policing is handled could be great. Encouraging brutality in arrests is not the massive change that could be great. It's taking the status quo and making it even worse.) Some are fairly blatant attacks on one group or another. (Taking Medicaid apart will get disabled people killed or institutionalized. See also: why ADAPT has been protesting at pretty much all things healthcare.)

These aren't distractions. To borrow a term from the Internet we rely so heavily on, it's a distributed denial of service attack (DDOS). The idea behind DDOS is that a person or group sends so many requests to a server at once that the server crashes and loses most or all of the requests, making whatever site it's supposed to host unusable. Think of all the bad things happening as requests - you want to do things about them, hopefully. Think of yourself as the server - you have a limited capacity to handle requests, or a limited capacity for issues to take action about. If you try to take action on all of them, you'll get overloaded and quite possibly handle none.

That's precisely the idea behind DDOS. Overwhelm the server (you, in this metaphor) and they can't do anything. For actual servers, there are a variety of ways to handle it but no perfect solutions, because a server that can't respond to requests for information isn't much of a server at all. For us, any one person clearly can't pay attention to every single issue. This isn't a call for you to focus on more things at a time. (That sounds like a bit of a contradiction, since to focus you generally need to narrow things down.) 

So: you can't focus on every single issue at once. You still need to focus on a few issues, or even just one. That's fine. The difference between understanding all the bad things happening that aren't your personal focus as distractions and understanding them as part of a DDOS attack is what happens when you encounter another person who is focused on a different set of a few issues. If those issues are distractions, their focus is a problem. If those issues are part of a DDOS attack, their focus is great. You want to know that other people are covering these other issues! Splitting up the issues between different groups of people so that everything gets covered even though you don't cover everything is the best way we have of responding if all the issues are real.
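For anyone who likes the server metaphor made literal, here's a tiny Python sketch of it. The capacity number and the issue labels R through Z are made up for illustration; nothing here is real data:

```python
# Toy sketch of the DDOS metaphor. A "server" (you) has a fixed
# capacity for issues; everything past that capacity gets dropped.

def handle_issues(incoming, capacity):
    """Take on issues up to `capacity`; everything past that is dropped."""
    handled, dropped = [], []
    for issue in incoming:
        (handled if len(handled) < capacity else dropped).append(issue)
    return handled, dropped

issues = ["R", "S", "T", "U", "V", "W", "X", "Y", "Z"]  # all legitimately bad

# One person trying to act on everything: most issues get dropped.
handled, dropped = handle_issues(issues, capacity=3)
print("handled:", handled)  # the few issues one person can focus on
print("dropped:", dropped)  # lost unless somebody else covers them

# The alternative: split the issues across people so each issue is
# *somebody's* focus, even though nobody covers everything.
people = 3
coverage = [issues[i::people] for i in range(people)]
print("coverage:", coverage)  # every issue lands in exactly one sublist
```

Each person still only handles a few issues, but between them nothing falls on the floor. That's the whole difference between the "distraction" framing and the "DDOS" framing.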

And what about things like foreign connections and the whole Russia mess that we know Trump doesn't like to have talked about? Noticing what news tends to come with increases in the DDOS onslaught is still useful. That's the news that they really want to make sure gets lost because we're too overwhelmed to deal with it.

Saturday, July 29, 2017

Language choices and history

Yeah, yeah, I know, I've talked about this before. Assuming I caught all my prior posts, this is the sixteenth time I've talked about language choices for autism, though this one isn't quite the same as the others. It’s coming as the result of a good discussion that helped me clarify thoughts I'd been having rather than the result of someone insisting my language choice is wrong because they were taught so.

So: I hate being called “differently abled.” It feels euphemistic to me, like we can’t admit to the fact that I’m disabled. I also hate being called a “person with autism.” Even being called “on the spectrum” rankles, and not just because I think the idea of autism as a spectrum gets used to reduce everything into a spectrum of “less autistic” to “more autistic” and also “higher functioning” to “lower functioning,” with these two incoherent concepts also being considered to be the same[i]. It’s also the way the term has been used. It’s a sort of (very recent) usage history that makes me extra wary of “on the spectrum.”

And history is the key to my current thoughts. Every way I can think of to identify myself as Autistic or as Queer has history. Usually as a slur, in the case of Queer identity - Queer itself is an example of this. “Autistic” as noun? It’s part of the dehumanizing nonsense that got person-first language started in the first place.

Person-first language, or “person with autism”? Yeah, it started in a good place, where people with disabilities, mostly intellectual or developmental disabilities, decided that they wanted that language to emphasize their personhood. Professionals were (frankly often still are) forgetting that we’re people. Said professionals picked up the language. They didn’t pick up the intent: remember that we’re people. At least in the case of autism, and probably for other disabilities, they picked up a completely different idea: that the autism or other disability is somehow separable from the person, and there’s a “normal” person underneath. That’s a history I want nothing to do with – don’t call me a person with autism. Also, if you need a language construction to remember that I’m human I don’t want you anywhere near me. I don’t. I’m not sorry.

“Differently abled”? Technically true, I guess. It’s another one where there may have been good intentions originally – recognizing that we have abilities that typical people may not have access to, and that this can be a direct result of our disabilities. (Or, or different abilities?) It gets used as a way to ignore the realities of disability, of access barriers, and sometimes of the reality that there are things we just can’t do.

“On the spectrum”? It’s been touted as a compromise solution to this language debate. Mostly by professionals who think “person on the spectrum” is less euphemistic than “person with autism” and by people “on the spectrum” who are willing to be tokenized, as far as I can tell. It’s not only unclear (there are many spectrums), but also still a person-first construction. That’s not a compromise! But folks insist it is one.

“Aspergers” or any variation thereof? 1) False. Literally does not apply. 2) When it was a diagnosis in the DSM, it was frequently applied to mean “high functioning” or to avoid scaring people with the “autism” label. It ties in with aspie supremacy, and that can kill. No way. That’s not just a history I don’t like. That’s a present I find morally reprehensible.

Now, I need to find a way to talk about who I am, what my experiences are as an Autistic person. I need to use language that will be understood. Making up new words is a valid option. It’s where new language comes from. I use plenty of words that were created in my community. But take a look at the history behind some of the words I said I have issues with. Some of them started in my community, or communities like mine. Then they got picked up by folks who want to pretend that the difference isn't quite real, isn't important, or can somehow be separated from the person (maybe needs to be in order for the person to count as a Real Person.) Even language that could be good has this happen. Then there’s the reclaimed slurs. (A lot of the language around Queerness is of the reclaimed slur type.) Just about all the language has problems of this sort. At this point, reasonable people can reach different preferences based on which bad history, which bad associations, which ones are we going to tolerate or reclaim for the sake of being understood?

Now, I am of the "queer as in fuck you" school of thought for most of my divergences[ii]. Disability is a word that scares people. “Good intentions” behind folks saying they don't see me as autistic, or as disabled are an indicator of how much disability is seen as a Bad Thing. Making people face the scary concept is actually an argument for using capitalized, identity first Disabled and Autistic in my case. Folks can sit with that particular discomfort, and if they tell me they don't see me that way or I shouldn't call myself that, they're getting asked 1) why they think their idea of me trumps my own, and 2) why they think they know better than I do what I should be called. If my identity is so uncomfortable for them that this is taken as attacking, we’ve got a big problem.

[i] That would totally be enough for me to hate being called “on the spectrum,” though.
[ii] This includes my actually being Queer, just to be clear.

Wednesday, June 21, 2017

Alyssa Reads: Critical Studies of the Sexed Brain -- Communication thoughts

I continue my thoughts from reading Critical Studies of the Sexed Brain. Because I had more and then forgot to put them up here. Go me.  Here's the citation again if you want it:

Kraus, C. (2012). Critical studies of the sexed brain: A critique of what and for whom? Neuroethics, 5(3), 247-259. doi:10.1007/s12152-011-9107-7

And now the quote that got me thinking:

“Critical neuroscientists frame the question of a science gap between neuro- and social scientists, experts and the public, just as couple's guides conceive of the gender gap in terms of unawareness, misunderstanding, or ignorance, promoting the idea that all matters can be settled through enhanced communication and better knowledge of each other's distinctive language, culture, needs or concerns.”

This needs more attention paid to it. Here is a big issue: there is a power imbalance. Patriarchy is a word for the imbalance in the couple's guide, and it would relate to the sciences one too since hard sciences tend to be thought of as men's fields while social sciences are thought of more as women's fields. (Accuracy of this thinking is another issue, but STEM in general runs man-heavy.)

That contributes to the rhetorical positioning of the fields, where neuroscientific “facts” can't be questioned by social sciences, even if questioning the facts isn't exactly what's going on. Sometimes it's questioning the causes and interpretation of the reported result rather than questioning whether or not the result was correct, or reproducible. Though the fMRI study of a dead fish is relevant, and so is the fMRI of the same person daily for about a year – fMRI is not infallible, no more than any scientific procedure is, and pretending it is will get us into trouble.

The author then asks about “lay expertise” from patients, relatives, and activists. Since I'm studying neuroscience but came from the Neurodiversity Movement before I got into neuroscience, I wonder where that puts me. As a neuroscience student, I'm one of the science people. As an Autistic person, I'm somewhat a patient. (Not much of one, haven't been in therapy related to autistic traits for a while, but when I write as an Autistic person, I go in that category.) And there is definitely a power difference between the roles. There has to be, for Theory of Mind to have been interpreted to mean autistic people can't understand our own experiences. Not everyone making use of the word thinks that, but it's an interpretation I've seen way too much of.

The author then points to this framework as “preventative politics,” where it keeps the peace by avoiding/assuaging conflict in the name of interdisciplinarity. She argues this could prevent good science that would come from controversy. I'd agree, but also say that it can involve silencing of ideas that aren't status quo as part of the peacekeeping.

Another issue with the focus on communication is that it only works if everyone is acting in good faith. It's the same problem with Nonviolent Communication and similar: if everyone is acting in good faith, it works fine. If anyone involved is actually seeking to maintain control or to do harm, consciously or not, it's not going to work. If one person's goals actively exclude the other person's goals, better communication can lead to figuring this out, but not to solving the problem. Seeking to expand the domain of one's own field without worrying too much about the domain of anyone else's field could lead to a similar failure in interdisciplinary communication ideas.

Tuesday, May 30, 2017

Let's talk about fidget spinners and patterns.

Fidget spinners are a fad. Thinkpieces about fidget spinners, therefore, are also a fad. That's how it works, right? On one side, there's people who are arguing that these are toys (true), that they are a fad (true), that they can distract some people (true), that there is not research showing improved focus from their use (true), and that they are not an accessibility issue (false). On another side, there's people arguing that they are a focus tool for some autistic people and/or people with AD(H)D (true), that the lack of evidence is due to a lack of research and not a statement of inefficacy to use against individuals who find them useful (true), that this can be an accessibility issue (true), and that their fad nature among neurotypical students is bad (false) because it is getting the toys banned (mixed truth value). I've also seen more nuanced views, generally from disabled people, but those seem to be the two main camps.

I want to point out a pattern in how accessibility discussions go, especially in educational contexts.
  1. A disabled person needs something for access reasons.
  2. Abled people call the thing distracting, because our existence in public is apparently distracting.
  3. The thing is either banned entirely or permitted only for people with the paperwork to prove they need it for disability reasons.
  4. Disabled people who need the thing either don't have access to the thing or must out themselves as disabled in order to gain access. If outing oneself is required, the thing is heavily stigmatized.
  5. Disabled people who have an actual access conflict with the thing are erased entirely, which makes conversations about possible solutions to the access conflict impossible. One set of needs or the other will "win." Any disabled people who need to avoid the thing are lumped in with the people who want to ban the thing for ableist reasons and therefore vilified. Which set of needs "wins" here varies, but it usually has some relationship to hierarchy of disability stuff and having one set "win" while the other "loses" is a bad solution regardless.
That's not just a fidget spinner thing, but it does apply here. With fidget spinners, autistic people and folks with ADHD (I'd love to know of a reasonably recognized way of talking about this neurotype without the second D/in a neurodiversity paradigm way, btw) end up in both the "need the thing" and the "need to avoid the thing" groups. I assume some other neurotypes are similarly split as well - I just don't have the familiarity to assert so. With visual alerts on fire alarms, D/deaf people need the thing. Since the visual is a strobe, a lot of neurodivergent people, especially people with photosensitive epilepsy, need to avoid the thing. With service animals, the folks who use them need the thing. People with allergies need to avoid the thing, and not everyone with an allergy can safely share a space with a service animal, even if they are treating their allergies. Conflicting access needs exist, and this pattern prevents us from finding ways to deal with the conflicts. Instead, one access need gets lumped in with abled people who don't like the thing because it's associated with disability and therefore presumed not to be a real need.

Now for fidgets: some people need something to do with their hands while listening if they're going to retain anything. I am in this group, by the way. In high school, I knit, I sewed, and I made chainmail - armor, not spam. I've also tried drawing, which takes care of the "need to do something in order to sit" issue but takes enough attention that I'm no longer following the conversation, so that doesn't work for me in class. Writing hurts quickly enough that while taking notes has sometimes been possible at university, there was no way it was going to be the answer for the duration of a school day in middle or high school. (I, specifically, should not have a laptop in class. If I'm going to need notes it's the least bad option, but least bad does not mean good.) So I did assorted arts and crafts that were fairly repetitive and totally unrelated to class. The biology teacher who told us on day one that he had ADHD was both the most understanding teacher about my need to fidget somehow and the teacher most at risk of being distracted by my making armor in class.

That last paragraph is the "no, really, I need to fidget." It's also the "there are several fidget options that work for me." Most, but not all, of the standard fidget toys will meet my needs, as I discovered because they are also a fad and I got some awesome fidget toys. This is important, when access conflicts come into play - if there are several options that meet the access need of the first disabled person, it's easier to find one option that everyone is OK with. When there are several options that work, requesting "not option A in situation W" is not an access issue, because options B through H are still fine. If we're going to come up with reasons that each of B through H are also not fine, individually, then we're going to have a problem.

The fidget toy fad is making options D through H cheaper and cooler. When fidgets are marketed as assistive technology, they are super expensive. Considering that disabled people tend not to have a lot of money, that's an access issue, so the fad is making a set of possible solutions more accessible. That's cool. It's also leading to a sufficient presence for teachers to make explicit policies about the toys (as opposed to banning them person by person), and for a flat ban to seem like a good idea to teachers who are seeing kids appear distracted by them. (My bet is that the neurotypical students who appear distracted actually are. I expect the autistic and ADHD students who appear distracted are a mix of actually distracted because they are just as distractable as any other student and only appearing to be distracted because of ableist ideas about what paying attention looks like. Remember, I'd fail special needs kindergarten as a twenty-four year old PhD student.) The explicit banning for everyone is ... not so good. Mostly because the other options are usually also disallowed or heavily stigmatized, and then we may well be left with no good options.

And let's not pretend handing everyone a fidget spinner, or any other fidget, is going to magically "solve ADHD" or whatever. I think some of the camp that's firmly against the toys is reaching that position for similar reasons to haters of weighted vests - we hand it over and the person is still autistic, or still ADHD. A tool that a person uses to cope in a less than accessible environment doesn't make them stop being disabled by the environment. Plus a fidget spinner isn't going to help everyone. Some people really will be distracted if they have something to play with, and some of those people really will be neurodivergent. Conflicting access needs, again, are a thing. If one person needs a fidget, and another needs not to be next to someone with an obvious fidget, those two people probably shouldn't sit next to each other. Giving people fidgets that they can use while the toy remains in their pocket is also a possibility in some cases. We can have conversations about access conflicts, if we admit that both sets of needs exist. (We also need to admit that some subset of the people making arguments about distraction are doing the bad faith argument where everything disabled people need is a distraction because, essentially, our presence in public is a distraction.)

[Let's also insert a plug for my Patreon. I write. I have a Patreon.]

Saturday, May 20, 2017

"Your taste buds will change"

CN for food and vomit.

That's one of those sentences I read every so often, which is technically true, but which doesn't actually lead to the conclusions I see it used to support. Taste buds really do change with age! This is a thing that happens, and it's part of why there are certain foods kids tend not to like but which adults are more able to tolerate. (I think most alcoholic drinks go in this category, where kids tend not to like the taste anyways?)

As true as it is that tastes change, there are some things my brain has decided I need to explain now about why this doesn't mean that getting into a power play with someone over what they eat and how "picky" they are is a good idea.

  1.  You probably don't know what the result of "pushing the issue" is going to be. I don't just mean long term results. I mean short term, in the minutes to hours right after forcing the (in)edible object down. Obviously, you don't expect it to be a big deal, or else you wouldn't be trying to force a "picky" eater to eat something they can't eat. How wrong are you ready to be? TMI alert, last time I made myself drink something that was an issue, it came back up. (If it hadn't been something I was medically supposed to have, I wouldn't have tried. It still didn't work, because it didn't stay down.)
  2. The fact that someone's tastes may change and they may be able to eat a food later doesn't mean they can tolerate it now. The change hasn't happened yet. So even if you're correct about the nature of the upcoming change, you're still trying to make someone eat something they don't currently tolerate. See point 1.
    1. Also, even if you were going to be correct, you can cause that not to happen by creating an association between being forced to eat the food and whatever sensory issue it's hitting. That can create a new issue with the food in question, besides taste...
  3.  The issue may not be the taste. I can't drink anything carbonated. You might think that's a rather broad category for a taste issue. You'd be correct. It's not a taste issue. It's best described as a texture issue, and you've said nothing about texture sensitivities changing. In fact, most of the foods I can't deal with are texture issues, not taste ones.
  4. The changes in taste may not be the ones you expected or hoped for. Some foods that were issues before can become non-issues, but it can go the other way too. As a very small human, I could eat mushrooms. As an adult human, I cannot eat mushrooms. (It's also the texture, not the taste.) Chocolate pudding was a "safe" food for me as a kid. It's about 50-50 on my being able to eat it now. (Texture again. Also, partially related to times when I didn't get the choice about yogurt, which has never been an OK texture and which is close enough to pudding that making yogurt even worse made pudding a problem. See point 2.1.) I ... actually can't think of any foods I can have now that I couldn't deal with as a kid.

Tastes do change as we get older. That doesn't mean they'll change the way you want them to, or that a possible change that hasn't happened yet justifies acting as if it's already happened.

Thursday, May 18, 2017

Alyssa Reads Critical Studies of the Sexed Brain

This is another one I read for neuroethics. I was considering using this article for my presentation on a neuroethics-related topic, but that didn't happen because someone else split off my too-large group and it wasn't too big anymore. We actually wound up talking about a medication used to treat addiction ... that can itself be addictive. Fun times. So, here are some of my thoughts from reading Critical studies of the sexed brain.

“They suggest that we work and talk across disciplines as if neuroscientists were from Mars and social scientists were from Venus, assigning the latter to the traditional feminine role of assuaging conflict” (247). sigh I am not surprised that some scientists think of social sciences that way.

Brain plasticity+ identity formation in intersex people, brains vs. genitals. That's going to be interesting. By which I mean, I have concerns. I have friends who are intersex. I know people who do intersex activism. And I know intersex people who concluded that intersex and/or nonbinary is their gender identity rather than picking one of the two binary genders. Hope the author isn't assuming a gender identity must be one of man/woman. Heck, mine isn't that and as far as I know, I'm not intersex.

Oi at calling autism a disease. It is a neurodevelopmental disability [or a neurotype, that's a good word and also let's remember what I'm saying when I say disability - the social model of disability is a thing.] Also I know the author found neurodiversity stuff because the article comes up when I search the journal for neurodiversity, what the heck? I don't expect to hear it called a neurotype in anything done by neurotypical(-passing) academics but really? Disease?

Ok, gender in the brain as a result of plasticity, that's going to be interesting – “reflect gendered behavior as learned and incorporated in a social context” is a thing, but please, please don't let this turn into “male socialization” for trans women or “female socialization” for trans men, or either of the above for nonbinary folks. The socialization of “consistently mistaken for X while actually Y” is not the same as the socialization of “X.” Ok, individual differences are a thing. That's good. “Plasticity arguments are extremely interesting as they wage war against both biological and social determinism, reductionism, essentialism, and other -isms.” Phew that's not the socialization argument I was worried about, I don't think.

Does she mean “cishet” by “normal people”? (Cishet=cisgender, heterosexual.) I appreciate the quotation marks around “normal people” but there probably is another word for what she means and using it would be nice.

Now we have one of my rage buttons. All caps time!

Intersex activist history! I knew about unwanted surgery, gender role training, and folks wanting their own intersex bodies back. I also know someone who was put on unwanted hormones. What are the results of Diamond getting so lauded while speaking in terms of brain sex, though? It's still the language coming from the people who try to enforce the man/woman dichotomy. What are the results of using the "sexed brain" discourse while not necessarily fitting in the binary? 

1 Walker, N. (September 27, 2014). Neurodiversity: Some basic terms and definitions. Neurocosmopolitanism: Nick Walker's notes on neurodiversity, autism, and cognitive liberty. [blog post] Retrieved from is a good explanation of the neurodiversity related vocabulary I tend to use when thinking about neuro stuff.

Thursday, May 11, 2017

Alyssa Reads Memory Blunting: Ethical Analysis- suffering and authenticity

I read "Memory Blunting: Ethical Analysis" by the President's Council on Bioethics, excerpted from Beyond Therapy: Biotechnology and the Pursuit of Happiness (2003) and appearing in Neuroethics: An Introduction with Readings, edited by Martha J. Farah. I did so because I am taking a neuroethics class and we're supposed to show that we're thinking about neuroethics stuff at least a bit outside class. Also because I'm super-interested in how neuro-stuff (especially neurodivergence but really all things neuro-) is represented in fiction (especially young adult speculative fiction.) I'm pretty much chucking my notes (drawn parallels, expressions of annoyance, and the occasional "OK that's legitimate") on my blog because as important as a lab notebook is, I like notes that are typed and searchable. I started with some connections to Allegiant, then some thoughts on collective effects of blunting trauma, and then cognitive liberty. Now here's suffering and authenticity.

The concerns about what we might do to others' minds if it were an issue of what person X does/chooses for person X, not what we are choosing for others. The concern seems to be about changing someone's true self, so suffering and authenticity come in again, just like cognitive liberty. These two seem frequently connected to me. If we recognize that people get to define their own "true selves", we don't get to moralize over which experiences are real and true anymore, which kind of kills the "not their true self" argument. Which is an argument I'm really not a fan of, especially considering which experiences it tends to be applied to.

This quote ... gives me the noble suffering/virtuous suffering sort of feeling, where whatever positive you might (not will, might) drag from the hell you go through means you shouldn't try to avoid that hell or save others from going through it.
Or will he succeed, over time, in 'redeeming' those painful memories by actively integrating them into the narrative of his life? By 'rewriting' memories pharmacologically, we might succeed in easing real suffering at the risk of falsifying our perceptions of the world and undermining our true identity. (90)
The version of a person that went through more bad things isn't automatically more real. The version of a person that's suicidal from trauma isn't automatically more real than the version of a person that takes medication to not be suicidal. Our choices define us, not just what we've been through, and using chemicals to get the parts of our histories we never chose to back the heck off? That's not less real. Suffering isn't the only way to be real. Enough of the noble suffering narrative. Enough.

Now to bring back a quote that I also talked about with cognitive autonomy:
And yet, there may be a great cost to acting compassionately for those who suffer bad memories, if we do so by compromising the truthfulness of how they remember. We risk having them live falsely in order to cope, surviving by whatever means possible. (92)
  (Survival is resistance etc)

And the concerns about what happens if we take out everything difficult? Those take a huge slippery slope argument, and not the kind where we've seen from experience that most people stop early or don't stop at all (destructive obedience is one of those.) Trauma is not the same thing as everything difficult in a person's life. Having to spend a lot of time and effort on reading and writing in order to become a good writer is not the same as witnessing a murder or being mugged or being a victim of abuse. One of these things is a choice: we're not under any obligation to become good writers. The others aren't choices. They're things that happen to us. How we deal with the results is at least partially a choice. (Not entirely. Especially when, due to technological or social constraints, dulling the pain while working through it isn't an option.) There is plenty of opportunity for hard work and achievement without forcing others to keep horrors in their heads for the sake of ill-defined authenticity.

Tuesday, May 9, 2017

Alyssa Reads Memory Blunting: Ethical Analysis- cognitive liberty

 I read "Memory Blunting: Ethical Analysis" by the President's Council on Bioethics, excerpted from Beyond Therapy: Biotechnology and the Pursuit of Happiness (2003) and appearing in Neuroethics: An Introduction with Readings, edited by Martha J. Farah. I did so because I am taking a neuroethics class and we're supposed to show that we're thinking about neuroethics stuff at least a bit outside class. Also because I'm super-interested in how neuro-stuff (especially neurodivergence but really all things neuro-) is represented in fiction (especially young adult speculative fiction.) I'm pretty much chucking my notes (drawn parallels, expressions of annoyance, and the occasional "OK that's legitimate") on my blog because as important as a lab notebook is, I like notes that are typed and searchable. I started with some connections to Allegiant, then some thoughts on collective effects of blunting trauma. Now here's cognitive liberty.

The concerns about what we might do to others' minds if it were an issue of what person X does/chooses for person X, not what we are choosing for others. Cognitive liberty. We don't seem to have a coherent definition of the self, and autonomy is complicated, but there is definitely a thing where a person either is or is not making the decisions about interventions taken (or not taken) on their own minds. Also on how folks define their own "true selves." What about who you are is important to you? Not what's important to me about who you are. Of course, that would stop us from moralizing over which experiences other people have are real and true vs. somehow fake. Changing one's own cognition by one's own choice isn't as acceptable as I think it should be.
And yet, there may be a great cost to acting compassionately for those who suffer bad memories, if we do so by compromising the truthfulness of how they remember. We risk having them live falsely in order to cope, surviving by whatever means possible. (92)
Again, we do to them. Not, we offer them the option. Do we think we know better than them what's right for them? That way lies all sorts of abuse "for their own good." And ... do we really think everyone would choose to dull the pain of a memory or to forget it? (Remember also that those two things are not the same.) Because I don't think that. I think lots of people would, but not everyone. Despite (because of?) my arguments about cognitive autonomy leaning towards letting people choose to blunt the trauma, I want the right to remember in my relatively unchanged way. It's just that the arguments run towards why everyone needs to be doing it that way, and I don't believe everyone needs to be remembering that way. I think enough people would choose to remember that we'd get whatever collective benefits the memory would provide, even if we let people choose to dull their pain. Not that I think the supposed benefits are nearly as strong as seems to be argued. Intentional ignorance is already a thing.

Thursday, May 4, 2017

Alyssa Reads Memory Blunting: Ethical Analysis- collective effects

I read "Memory Blunting: Ethical Analysis" by the President's Council on Bioethics, excerpted from Beyond Therapy: Biotechnology and the Pursuit of Happiness (2003) and appearing in Neuroethics: An Introduction with Readings, edited by Martha J. Farah. I did so because I am taking a neuroethics class and we're supposed to show that we're thinking about neuroethics stuff at least a bit outside class. Also because I'm super-interested in how neuro-stuff (especially neurodivergence but really all things neuro-) is represented in fiction (especially young adult speculative fiction.) I'm pretty much chucking my notes (drawn parallels, expressions of annoyance, and the occasional "OK that's legitimate") on my blog because as important as a lab notebook is, I like notes that are typed and searchable. I started with some connections to Allegiant. Now here's thoughts about the collective effects of forgetting, as worried about by the authors (and as I tend to think we deal with even without dulling memories pharmacologically.)

I have a concern about this supposed legal argument against using beta blockers or similar medications to reduce the emotional impact or trauma from publicly important events. (The given example was a terrorist attack. I can ... kind of tell this was written not too long after 9/11.)  The idea is that it's important to have some witnesses remember the event accurately. There's a problem: I remember from my introductory neurobiology class that when a memory is super emotional, we feel quite certain of our recollection ... but that we can still be completely wrong in our memory of what happened. Ask people where they were on 9/11, or when the space shuttle exploded, and some will tell you they were listening to or watching other events that didn't happen on those days. Sometimes didn't even happen that time of year. But we are confidently wrong! So as useful as accurate recollection would be for legal purposes, maintaining the traumatic impact on the witnesses doesn't make accurate recall happen anyways. Also, eyewitness testimony is notoriously unreliable to begin with. This is a bad argument because the thing we're claiming to want to preserve already doesn't exist.

On that note, I wish the authors had said something more about the social and personal effects of blunting our collective traumas. I'm not entirely convinced that leg of the argument is going to hold either. After all, I'm a Jewish (and Queer, and Disabled) descendant of Holocaust survivors, and I know how we're never supposed to forget. I'd be a lot more inclined to buy into the value of collectively remembering and the consequences of forgetting if we'd stopped having genocide or deciding that certain religions are inherently more dangerous or lesser. But we didn't. These things all still happen. The things we're claiming to want to prevent already happen with our supposed preventative in place, and that means I don't trust the argument.

The murder witness example actually does concern me. "Yes, I was there. But it wasn't so terrible." (91). We don't want murder to be thought of as not so terrible. I know we don't want that because sometimes it is already considered not so terrible. See also: "mercy" killings of disabled people by the folks who are supposed to take care of them. It already just depends on the choice of victim, and that's terrifying. I don't want the idea of murder as not so terrible spreading any further than it has. I want it gone. I want all murders recognized as being as bad as they are.

I also have issues with the juxtaposition (and sometimes what seems like conflation) of giving a victim relief and medicating away (or relieving, I suppose I should use the same language for each) the guilt of perpetrators. Those are not morally equivalent. Victims and attackers or abusers are not the same. When we're talking about a mutual conflict, as in the case of war (the most talked about cause of PTSD, but far from the only one), there may not be a clear aggressor or victim. There also may be. It depends on what's going on, really (and remember how often the military is painted as the only way out for people in poverty, at the same time we remember the atrocities soldiers often commit.) Still, when we're talking about accidents and survivors of terrorist attacks, there are clear innocents. (Not "perfect victims" in the sense that they never did anything else even slightly wrong, but innocent in the sense that they didn't choose what happened to cause the trauma.)