Let’s say you’re on a business trip, and you get lonely, so you decide to hire a prostitute. But you like the girls you know back home, so you place a call to your pimp back home, offering to pay for the whore’s transport to wherever you are in addition to your usual fee. Does it bend the law? Maybe. Does it mean you’re not fit for your job, even if you’re, say, a project manager expected to lead? Probably not. Does it make you a horribly immoral person? Well, not that much more immoral than hiring a prostitute in the first place, which, if you believe some people, is not much different from smoking marijuana. Should you be run out of your job and disgraced for life regardless of how good a job you did before? If you used company funds, maybe; but if you paid with your own money, it’s not even the company’s business.
But if you’re the governor of New York? Apparently it’s a different story.
I’ve been reading about the Eliot Spitzer scandal, and beyond the hypocritical irony, I’m seeing a distinct disconnect. I’m not seeing how “patronized a prostitution ring” exactly equates to “is a corrupt politician” or, considering just how popular prostitution really is, “is a reprehensible person”. If he had used campaign or state funds to pay for his “night of fun”, or if he had lied under oath about it or actively tried to obstruct the investigation instead of semi-fessing up, I could see the scandal, but if it’s about doing something that any red-blooded American of the same gender would do (well, most)?
Doesn’t this only show that Spitzer is (gasp!) actually human and not a perfect little saint? Do we actually expect our politicians to be the latter? Considering how many corrupt, truly reprehensible politicians there are out there, shouldn’t we be focusing on more important things for us to get upset about our politicians? JFK was anything but a saint, after all.
Really, aren’t there more important things for the media to talk about? I would think the damage the Bush administration has done over the past seven years is far more important than a governor’s sexual escapades. Bill Clinton, after all, had sex outside marriage while in an executive office, and I would say it didn’t hurt his ability to be president too badly, wouldn’t you agree?
(No, this isn’t what I was hinting at earlier.)
Assuming you live in the United States, you’re probably used to races being called virtually the instant the polls close. Networks, not wanting to deal with – heaven forbid! – uncertainty (or losing the scoop to a rival network), use exit polls to “cheat” and declare the winner of a race certain without any actual results to go by. You may well have been confused in 2000 when Florida was called for Gore while Bush was consistently leading.
I believe I have a better system to call results based on one thing and one thing only: the results themselves. But it appears complicated at first glance because, as it’s evolved over the years, it involves four different methods of calling a race – four different levels of certainty.
Projection was originally developed as a way for me to avoid having to wait for validation of a foregone conclusion. Used when one candidate consistently leads another by a statistically significant margin, it’s most akin to the networks’ approach, but “projection” isn’t quite the right word; it’s really more of an expectation. Lately I’ve been drifting towards using this simply to reflect what the networks call, or to ape the AP’s calls.
Auto projection and the other automated methods assume all precincts have an equal number of voters, which isn’t necessarily true, but it’s good enough. If Candidate A leads Candidate B by A% to B% with P% of the precincts reporting, then, with all percentages expressed as fractions of 1, the race is auto projected to A if A% > B% + (1-P%). In other words, A must lead B by at least the percentage of precincts not reporting. This one’s in here for its simplicity and the ability to provide some satisfaction before the really significant one.
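Expressed as code, the auto projection test is a one-liner. This is just a minimal sketch with hypothetical function and variable names, with vote shares and precincts reporting given as fractions of 1:

```python
def auto_projected(a_share, b_share, pct_reporting):
    """Auto projection: A's share must exceed B's share plus the
    fraction of precincts not yet reporting (all values in [0, 1])."""
    return a_share > b_share + (1 - pct_reporting)

# A 55-45 lead with 92% of precincts in clears the bar:
# 0.55 > 0.45 + 0.08
print(auto_projected(0.55, 0.45, 0.92))  # True

# The same lead with only 80% in does not:
# 0.55 > 0.45 + 0.20 is false
print(auto_projected(0.55, 0.45, 0.80))  # False
```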
Confirmation is a result of the implications of the above assumption, which indicates that A has really won A%*P% of all the votes in play. (Similarly, B has won B%*P%.) Thus, this test involves multiplying A% and B% by P% and repeating the auto projection test: A%*P% > B%*P% + (1-P%). If A passes this test, and the assumption above is true, it is mathematically impossible for B to pass A. B has been “eliminated” and, if B was second, A is no-doubt-about-it first. A network using this system might still say A “has been auto projected” to win, but once A crosses that confirmation threshold, you don’t say A “has been confirmed” – you say A has “won”, no doubt about it.
Majority confirmation is one I’m considering dropping. In a two-man race it’s the same as regular confirmation. In large or tightly contested races it might not occur, as I’ve found out in the early presidential primaries. And in all races it’s meaningless, because the confirmation threshold has already sealed A’s victory, unless having a majority is meaningful in some way. It basically puts A up for confirmation, but against the 50% threshold instead of B’s reporting-adjusted maximum: A%*P% > .5.
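The two confirmation tests can be sketched the same way (again with hypothetical names; shares and precincts reporting as fractions of 1):

```python
def confirmed(a_share, b_share, pct_reporting):
    """Confirmation: A's share of ALL votes in play (a_share * pct_reporting)
    must exceed B's share of all votes in play plus everything still
    outstanding, making it mathematically impossible for B to catch up."""
    return a_share * pct_reporting > b_share * pct_reporting + (1 - pct_reporting)

def majority_confirmed(a_share, pct_reporting):
    """Majority confirmation: A's share of all votes in play exceeds 50%."""
    return a_share * pct_reporting > 0.5

# A 55-45 lead at 95% reporting: 0.5225 > 0.4275 + 0.05, so confirmed...
print(confirmed(0.55, 0.45, 0.95))       # True
# ...and 0.5225 > 0.5, so majority-confirmed as well.
print(majority_confirmed(0.55, 0.95))    # True

# At 90% reporting the same lead clears neither bar
# (0.495 vs. 0.405 + 0.10, and 0.495 vs. 0.5).
print(confirmed(0.55, 0.45, 0.90))       # False
print(majority_confirmed(0.55, 0.90))    # False
```

Note how majority confirmation can fail even after regular confirmation would succeed in a crowded field, which is the sense in which it adds nothing in most races.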
I profess to having something of an interest in politics, and I’m starting to follow the coming 2008 election with some interest. From here until November 4, I’ll be counting down every second here on Da Blog.
More such countdowns are forthcoming.
UPDATE: Switched to a different code, which appears to be working. But it doesn’t do anything more than a year in the future, and only allows the target to be chosen in hour increments.
As if a mass of states moving their primaries to Super-Duper Tuesday wasn’t enough, now Michigan wants to move its primary to January.
I’m going to make a guarantee here. By 2012, someone, either the parties or the government, will mandate that all primaries and caucuses must occur on the same day in all 50 states. New Hampshire and Iowa will just have to deal.
I’m not really a political junkie, but I do pay a lot of attention when election season rolls around. We’re just two years away from a unique election cycle, when neither a sitting president nor vice-president will be running for president.
As with most of the things I’m intensely interested in, I have a project I’m working on for it. In this case, it’s a ranking of the potential nominees from each party based on their chances of winning the nomination. Positions on the issues play no role in this; I base it entirely on polls and fundraising.
And right now, both are failing me. The FEC’s web site doesn’t yet contain any financial data for the current election cycle. As for polling, it works very well near the top but is worthless at the very bottom.
Consider this ABC-Washington Post poll. Note that there are six Republican candidates that got 1% in the poll and three that got 0%. The sample size of Republicans is 344, so 1.72 would be the number of respondents that represents .5% of the poll, anything below which shows up here as 0%. How am I supposed to separate those three at 0% when they either got 0 or 1 person saying their name?
It gets worse. The threshold for 1.5% would be 5.16 respondents, so all those candidates at 1% got 2, 3, 4, or 5 respondents saying their name. I am left to assume that the poll results are sub-sorted by how many respondents said a name, but ties still exist; worse, if they’re in alphabetical order, I can’t tell which comparisons of two back-to-back candidates represent ties and which represent a different number of respondents! And it all reflects the luck of the draw! I’m ignoring margin of error in my rankings, but even I can’t ignore this!
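To make the rounding concrete, here is a tiny sketch (the function name is my own, hypothetical) of how a respondent count maps to the percentage the poll publishes:

```python
def reported_percent(respondents, sample):
    """Percentage as it would appear in the published poll,
    rounded to the nearest whole percent."""
    return round(100 * respondents / sample)

sample = 344  # Republicans in the ABC-Washington Post poll

# 1 respondent out of 344 is about 0.29%, which still rounds down to 0%...
print(reported_percent(1, sample))   # 0

# ...while anywhere from 2 to 5 respondents (0.58% to 1.45%) all show as 1%
print([reported_percent(n, sample) for n in range(2, 6)])  # [1, 1, 1, 1]

# 6 respondents (1.74%) is the first count that rounds up to 2%
print(reported_percent(6, sample))   # 2
```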
This poll was conducted on a national sample of 1000 adults. That’s how many should be polled from each party. The poll’s total sample should be closer to 2500.
Then I got an idea. Perhaps we could combine the results from several polls, thus adding to the sample size and lowering the margin of error. The chances of two polls contacting the same person are astronomically small, so it’s effectively one big poll. For example, there are three similar polls from this month in the same field: the ABC-Washington Post poll above with its 344 Republican respondents, the Gallup Poll with 412, and the Zogby Poll with 301. All have, ultimately, the same problem, but when you add their sample sizes together you get 344+412+301=1057 respondents in the sample. That means 5.285 respondents represent .5%, enough for some separation, weak though it may be; meanwhile, 15.855 respondents represent 1.5%, enough to rest easy that the six candidates at 1% would have at least some separation.
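The combined-sample arithmetic is easy to check; this sketch just uses the respondent counts quoted above (the poll names are labels of mine, nothing more):

```python
# Republican respondents in each poll, as quoted in the text
polls = {"ABC-Washington Post": 344, "Gallup": 412, "Zogby": 301}

combined = sum(polls.values())
print(combined)  # 1057

# Respondent counts at the rounding cutoffs for the combined sample:
# below 0.5% a candidate shows as 0%, below 1.5% as 1%
print(round(combined * 0.005, 3))  # 5.285
print(round(combined * 0.015, 3))  # 15.855
```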
I would love to be the person to create this “superpoll”, which would be important far beyond this context, but unfortunately, the raw data (the pure numbers of respondents) is treated as fairly proprietary. Either I have to subscribe to a service to get it (always for a fee) or the pollsters don’t offer it at all. Why, I’m not sure. I could guesstimate it by weighting the results of the various polls, but that’s an inexact science to say the least.
That leaves me nothing to work with, at least in the back of the field, but the analysis of others. I know it’s early and a lot can change, but predicting the future isn’t my priority so much as determining what’s going on right now, despite my emphasis on fundraising. Judging by polls from 2004, the sample size of polls won’t be increasing from here, though we might see a little more separation. It probably won’t come very quickly, though – not with a field of this size.