What I'm going to term strong free will holds to this principle: if an agent is morally responsible for A-ing (performing or willing some action), then s/he must have been able to refrain from A-ing (i.e., possibly not have A-ed given the exact same conditions, including the agent's set of beliefs, attitudes, and desires). To me, our reading from Notes from the Underground seems in line with this construction of strong free will, as the underground man repeatedly says something along the lines of: making a decision is not like taking the square root of a number.
IMO, there is a fairly strong thought-experiment counterexample to the claim that, in order for someone to be responsible for an action, they necessarily had to have strong free will at the time. I will say in advance that this in no way completely obliterates the idea that someone needs to have been able to do otherwise (in the robust sense mentioned above) in order to be morally responsible for what they've done; it only illustrates that strong free will does not always reliably ground responsibility. In other words, strong free will is not a necessary condition for being morally responsible.
The example I have in mind originally comes from the philosopher Harry Frankfurt, but I'm going to use John Martin Fischer's version, because it's the clearest and includes a prior sign to establish how Black can know what Jones intends to do.
"Suppose Jones is in a voting booth, deliberating about whether to vote for Gore or Bush. After reflection, he chooses to vote for Gore…unbeknownst to him, Black, a liberal neurosurgeon…has implanted a device in Jones’ brain which monitors Jones’ brain activity. If Jones is about to choose to vote for Gore, Black’s device simply keeps monitoring and does not intervene in Jones’ process in any way. If, however, Jones is about to vote [for anyone other than Gore, say Bush, which Jones displays through an involuntary sign at T1]…the device intervenes and…electronically stimulates… Jones’ brain in a manner sufficient to produce a choice to vote for Gore at T2 (and subsequently cast a vote for Gore at T3)." (John Fischer, “Frankfurt-style Examples, Responsibility and Semi-compatibilism,” Free Will. Ed. Robert Kane. Malden, MA: Blackwell Publishers, 2002, 93.)
Given that Black’s prior sign device ends up playing no role in Jones’ deliberations and final act of voting (Jones opted to vote for Gore on his own, without any actual interference from Black’s device), it seems that Jones votes freely and is responsible for voting for Gore, despite the fact that the presence of Black’s device eliminates any alternative possibilities with regard to whom Jones chooses and votes for.
What is significant here is that Black does not intervene because Jones happens to decide to vote the way Black wants, yet Black's presence still closes off alternate possibilities.
There is one obvious criticism here. Despite the initial appearance of no alternate possibilities, Jones in fact does have a limited alternate possibility, what Fischer calls a ‘flicker of freedom’: Jones can still exhibit a different prior sign at T1, say a different neurological firing pattern in his brain, than the one he did in Fischer and Frankfurt’s example. In other words, no matter what, it is always possible for Jones to display a different neurological sign right before Black would intervene.
Really, though? Proponents of strong free will, as I presented it, are committed to saying that Jones is responsible for his decision in virtue of his ability to produce a different neurological firing pattern in his brain. This stretches the credibility of our intuition that people are responsible for voluntarily acting on their own to the breaking point, because prima facie displaying a different, unconscious neural state does not look like a free, voluntary choice at all, especially if one stipulates a Stumpian qualification (I call it this because the philosopher Eleonore Stump, I believe, came up with it) where “if the firing of the whole neural sequence correlated with a mental act is not completed, the result is not…an incomplete mental act (say the beginning of a choice or decision)…[but] no mental act at all” (Fischer 103). On this construction, what kind of significant voluntary control could an agent possibly have over a neurological state where, if the neuron-firing sequence is not completed, a thought/mental event does not even occur?
The other main objection I see claims that it is logically impossible for Black to have a foolproof prior sign. But my aforementioned Stumpian qualification to the Frankfurt example circumvents this issue, as it allows for indeterminism at T0 (i.e., Jones’ complete set of beliefs and desires before going to the voting booth not yet being sufficient to settle his vote one way or the other) and at T1, right up until Jones ends his deliberative process and chooses whom he will vote for at T2. Yes, determinism has to ‘lock in’ after a certain point, once the neural pathways start firing in a way that generates a decision about whom to vote for, but that does not mean we cannot characterize the general state of affairs as indeterministic. It is a mistake to assume that, because part of Jones’ deliberative process is determined, the whole T0-T3 period has to be called deterministic. All such a criticism indicates to me is that some versions of indeterminism rule out the tenability of Frankfurt-style examples, not all of them.
What this example shows is that freedom and moral responsibility don't necessarily come to the same thing. These types of examples don't show that the strong free will version of 'could have done otherwise' is never correct, only that in certain circumstances (and somewhat outlandish ones at that) strong free will does not reliably ground moral responsibility.
I apologize if this post was overly technical and jargony. I had to write a paper basically on this very subject last semester in Metaphysics, so I'm drawing very heavily from that. I'll be happy to try and explain any unclear or vague terms I used.