
The Next Processor Change is Within ARM's Reach

As you may have seen, I sent the following Tweet: “The Apple ARM MacBook future is coming, maybe sooner than people expect” https://twitter.com/choco_bit/status/1266200305009676289?s=20
Today, I would like to further elaborate on that.
tl;dr: Apple will be moving to ARM-based Macs in what I believe are 4 stages, starting around 2015 and ending around 2023-2025: the release of T1-chip MacBooks, the release of T2-chip MacBooks, the release of at least one lower-end ARM MacBook, and the transition of the full lineup to ARM. Reasons for each are below.
Apple is very likely going to switch their CPU platform to their in-house silicon designs built on the ARM architecture. This understanding is fairly common amongst various Apple insiders. Here is my personal take on how this switch will happen and be presented to the consumer.
The first question would likely be “Why would Apple do this again?”. Throughout their history, Apple has already made two other storied CPU architecture switches - first from the Motorola 68k to PowerPC in the early 90s, then from PowerPC to Intel in the mid 2000s. Why make yet another? Here are the leading reasons:
A common refrain heard on the Internet is the suggestion that Apple should switch to using CPUs made by AMD, and while this has been considered internally, it will most likely not be chosen as the path forward, even for megalithic giants like the Mac Pro. Even though AMD would mitigate Intel's current set of problems, it does nothing to address the x86_64 architecture's own problems and inefficiencies, on top of jumping to a platform that doesn't have a decade of proven support behind it. Why spend a lot of effort re-designing and re-optimizing for AMD's platform when you can just put that effort into your own, and continue the vertical integration Apple is well-known for?
I believe that the internal development for the ARM transition started around 2015/2016 and is considered to be happening in 4 distinct stages. Not all of this comes from Apple insiders; some of it is my own interpretation based off of information gathered from supply-chain sources, examination of MacBook schematics, and other indicators from Apple.

Stage 1 (2014/2015 to 2017):

The rollout of computers with Apple's T1 chip as a coprocessor. This chip is very similar to Apple's T8002 chip design, which was used for the Apple Watch Series 1 and Series 2. The T1 is primarily present on the first TouchID-enabled Macs: the 2016 and 2017 model year MacBook Pros.
Considering the amount of time required to design and validate a processor, this stage most likely started around 2014 or 2015, with early experimentation to see whether an entirely new chip design would be required, or if it would be sufficient to repurpose something in the existing lineup. As we can see, the general purpose ARM processors aren't a one-trick pony.
To get a sense of the decision making at the time, let's look back a bit. The year is 2016, and we're witnessing the beginning of the stagnation of Intel's processor lineup. There is not a lot to look forward to other than another "+" being added to the 14nm fabrication process. The MacBook Pro has used the same design for many years now, and its age is starting to show. Moving to AMD is still very questionable, as they've historically not been able to match Intel's performance or functionality, especially at the high end; the "Ryzen" lineup is still unreleased, so there are absolutely no benchmarks or other data to show they are worth consideration, and AMD's most recent line of "Bulldozer" processors was very poorly received. Now is probably as good a time as any to begin experimenting with the in-house ARM designs, but it's not time to dive into the deep end yet: our chips are not nearly mature enough to compete, and it's not yet certain how long Intel will be stuck in the mud. As well, it is widely understood that Apple and Intel have an exclusivity contract in exchange for advantageous pricing. Any transition would take considerable time and effort, and since there is no current viable alternative to Intel, the in-house chips will need to advance further, and breaching a contract with Intel is too great a risk. So it makes sense to start with small deployments, to extend the timeline, stretch out to the end of the contract, and eventually release a real banger of a Mac.
Thus, the 2016 Touch Bar MacBooks were born, alongside the T1 chip mentioned earlier. There are good reasons for abandoning the piece of hardware previously used for a similar purpose, the SMC or System Management Controller. I suspect that the biggest reason was to allow early analysis of the challenges that would be faced migrating Mac built-in peripherals and IO to an ARM-based controller, as well as exploring the manufacturing, power, and performance results of using the chips across a broad deployment, and analyzing any early failure data, then using this to patch any issues, enhance processes, and inform future designs looking towards the 2nd stage.
The former SMC duties now moved to the T1 include things like:
The T1 chip also communicates with a number of other controllers to manage a MacBook’s behavior. Even though it’s not a very powerful CPU by modern standards, it’s already responsible for a large chunk of the machine’s operation. Moving control of these peripherals to the T1 chip also brought about the creation of the fabled BridgeOS software, a shrunken-down watchOS-based system that operates fully independently of macOS and the primary Intel processor.
BridgeOS is the first step for Apple's engineering teams to begin migrating underlying systems and services to integrate with the ARM processor, and it allowed internal teams to more easily and safely develop and issue firmware updates. Since BridgeOS is based on a standard and now well-known system, it means that they can leverage existing engineering expertise to flesh out the T1's development, rather than relying on the more arcane and specialized SMC system, which operates completely differently and requires highly specific knowledge to work with. It also allows reuse of the same fabrication pipeline used for Apple Watch processors, and eliminates the need to have yet another IC design for the SMC, coming from a separate source, saving a bit on cost.
Also during this time, on the software side, "Project Marzipan", known today as Catalyst, came into existence. We'll get to this shortly.
For the most part, Stage 1 went off without any major issues. There were a few firmware problems at first during the product launch, but they were quickly solved with software updates. Now that engineering teams had experience building for, manufacturing, and shipping the T1 systems, Stage 2 could begin.

Stage 2 (2018-Present):

Stage 2 encompasses the rollout of Macs with the T2 coprocessor, replacing the T1. This covers a much wider lineup: the MacBook Pro with Touch Bar starting with the 2018 models, the MacBook Air starting with the 2018 models, the iMac Pro, the 2019 Mac Pro, and the Mac Mini starting in 2018.
With this iteration, the more powerful T8012 processor design was used, a further revision of the T8010 design behind the A10 processors used in the iPhone 7. This change provided a significant increase in computational ability and brought about the integration of even more devices into the T2. In addition to the T1's existing responsibilities, the T2 now controls:
Those last 2 points are crucial for Stage 2. Under this new paradigm, the vast majority of the Mac is now under the control of an in-house ARM processor. Stage 2 also brings iPhone-grade hardware security to the Mac. These T2 models also gained a supported DFU mode (Device Firmware Update, more commonly "recovery mode"), which acts similarly to the iPhone DFU mode and allows restoration of the BridgeOS firmware in the event of corruption (most commonly due to user-triggered power interruption during flashing).
Putting more responsibility onto the T2 again allows Apple's engineering teams to do more early failure analysis on hardware and software, monitor the stability of these machines, experiment further with large-scale production and deployment of this ARM platform, as well as continue to enhance the silicon for Stage 3.
A few new user-visible features were added as well in this stage, such as support for the passive “Hey Siri” trigger, and offloading image and video transcoding to the T2 chip, which frees up the main Intel processor for other applications. BridgeOS was bumped to 2.0 to support all of these changes and the new chip.
On the macOS software side, what was internally known as Project Marzipan was first demonstrated to the public. Though it was originally discovered around 2017, and most likely began development and testing during the later parts of Stage 1, its effects could be seen in 2018 with the release of iPhone apps now running on the Mac using the iOS SDKs: Voice Memos, Apple News, Home, Stocks, and more, with an official announcement and public release at WWDC in 2019. Catalyst became the public name for Marzipan. This SDK release allows app developers to easily port iOS apps to run on macOS, with minimal or no code changes, and without needing to develop separate versions for each. The end goal is to allow developers to submit a single version of an app and have it work seamlessly on all Apple platforms, from Watch to Mac. At present, iOS and iPadOS apps are compiled for the full gamut of ARM instruction sets used on those devices, while macOS apps are compiled for x86_64. The logical next step is to cross this bridge and unify the instruction sets.
The products using the T2 have not been quite as well received as those with the T1. Many users have noticed how this change contributes further towards machines with limited to no repair options outside of Apple's repair organization, as well as some general issues with bugs in the T2.
Products with the T2 also no longer have the "Lifeboat" connector, which was previously present on the 2016 and 2017 model Touch Bar MacBook Pro. This connector allowed a certified technician to plug in a device called a CDM Tool (Customer Data Migration Tool) to recover data off of a machine that was not functional. The removal of this connector limits the options for data recovery in the event of a problem, and Apple has never offered any data recovery service, meaning that an irreparable failure of the T2 chip or the primary board would result in complete data loss, in part due to the strong encryption provided by the T2 chip (even if the data could be read off the storage, the encryption keys would be lost with the T2 chip). The T2 also brought about the pairing of serial numbers for certain internal components, such as the solid state storage, display, and trackpad. In fact, many other controllers on the logic board are now also paired to the T2, such as the WiFi and Bluetooth controller, the PMIC (Power Management Controller), and several others. This is the exact same system used on newer iPhone models and is quite familiar to technicians who repair iPhone logic boards. These changes are fantastic for device security and for corporate and enterprise users, allowing a very high degree of assurance that devices will refuse to boot if tampered with in any way - even through storied supply chain attacks, or other malfeasance that can be done with physical access to a machine. But they have created difficulty for consumers, who more often lack the expertise or awareness to keep critical data backed up, as well as the funds to pay for the necessary repairs from authorized repair providers. Other reported issues suspected to be related to the T2 are audio "cracking" or distortion on the internal speakers, and BridgeOS becoming corrupt following a firmware update, resulting in a machine that can't boot.
I believe these hiccups will be properly addressed once macOS is fully integrated with the ARM platform. This stage of the Mac is more like a chimera of an iPhone and an Intel-based computer. Technically, it does have all of the parts of an iPhone present within it, cellular radio aside, and I suspect this fusion is why these issues exist.
Recently, security researchers discovered an underlying security problem present within the Boot ROM code of the T1 and T2 chip. Due to being the same fundamental platform as earlier Apple Watch and iPhone processors, they are vulnerable to the “checkm8” exploit (CVE-2019-8900). Because of how these chips operate in a Mac, firmware modifications caused by use of the exploit will persist through OS reinstallation and machine restarts. Both the T1 and T2 chips are always on and running, though potentially in a heavily reduced power usage state, meaning the only way to clean an exploited machine is to reflash the chip, triggering a restart, or to fully exhaust or physically disconnect the battery to flush its memory. Fortunately, this exploit cannot be done remotely and requires physical access to the Mac for an extended duration, as well as a second Mac to perform the change, so the majority of users are relatively safe. As well, with a very limited execution environment and access to the primary system only through a “mailbox” protocol, the utility of exploiting these chips is extremely limited. At present, there is no known malware that has used this exploit. The proper fix will come with the next hardware revision, and is considered a low priority due to the lack of practical usage of running malicious code on the coprocessor.
At the time of writing, all current Apple computers have a T2 chip present, with the exception of the 2019 iMac lineup. This will change very soon with the expected release of the 2020 iMac lineup at WWDC, which will incorporate a T2 coprocessor as well.
Note: from here on, this turns entirely into speculation based on info gathered from a variety of disparate sources.
Right now, we are in the final steps of Stage 2. There are strong signs that a MacBook (12") with an ARM main processor will be announced this year at WWDC ("One more thing..."), at a Fall 2020 event, a Q1 2021 event, or WWDC 2021. Based on the lack of a more concrete answer, WWDC 2020 will likely not see it, but I am open to being wrong here.

Stage 3 (Present/2021 to 2022/2023):

Stage 3 involves the introduction of at least one fully ARM-powered Mac into Apple's computer lineup.
I expect this will come in the form of the previously-retired 12" MacBook. There are rumors that Apple is still working internally to perfect the infamous Butterfly keyboard, and there are also signs that Apple is developing an A14x-based processor with 8-12 cores designed specifically for use as the primary processor in a Mac. It makes sense that this model could see the return of the Butterfly keyboard, considering how thin and light it is intended to be, and using an A14x processor would make it a very capable, very portable machine that should give customers a good taste of what is to come.
Personally, I am excited to test the new 12" “ARMbook”. I do miss my own original 12", even with all the CPU failure issues those older models had. It was a lovely form factor for me.
It's still not entirely known whether the physical design of these will change from the retired version, exactly how many cores it will have, the port configuration, etc. I have also heard rumors about the 12” model possibly supporting 5G cellular connectivity natively thanks to the A14 series processor. All of this will most likely be confirmed soon enough.
This 12" model will be the perfect stepping stone for Stage 3, since Apple's ARM processors are not yet a full-on replacement for Intel's full processor lineup, especially at the high end, in products such as the upcoming 2020 iMac, iMac Pro, 16" MacBook Pro, and the 2019 Mac Pro.
Performance of Apple's ARM platform compared to Intel has been a big point of contention over the last couple of years, primarily due to the lack of data representative of real-world desktop usage scenarios. The iPad Pro and other models with Apple's highest-end silicon still lack the ability to run a lot of high-end professional applications, so data beyond video editing and photo editing benchmarks quickly becomes meaningless. While there are purely synthetic benchmarks like Geekbench, Antutu, and others that try to bridge the gap, they are very far from being accurate or representative of real-world performance in many instances. Even though Apple's ARM processors are incredibly powerful, and I do give constant praise to their silicon design teams, there still just isn't enough data to show how they will perform in real-world desktop usage scenarios, and synthetic benchmarks are like standardized testing: they only show how good a platform is at running the synthetic benchmark. This type of benchmark stresses only very specific parts of each chip at a time, rather than measuring how well it does a general task, and then boils down the complexity and nuances of each chip into a single numeric score, which is not a remotely accurate way of representing processors with vastly different capabilities and designs. It would be like gauging how well a person performs a manual labor task by averaging only the speed of every individual muscle in the body, regardless of if, or how much, each is used. A specific group of muscles being stronger or weaker than others could wildly skew the final result and grossly misrepresent the performance of the person as a whole. Real-world program performance will be the key in determining the success and future of this transition, and it will have to be great on this 12" model - not just in a limited set of tasks, it will have to be great at *everything*. It is intended to be the first Horseman of the Apocalypse for the Intel Mac, and it had better behave like one. Consumers have been expecting this, especially after 15 years of Intel processors, the continued advancement of Apple's processors, and the decline of Intel's market lead.
The point of this "demonstration" model is to ease both users and developers into the desktop ARM ecosystem slowly. Much like how the iPhone X paved the way for FaceID-enabled iPhones, this 12" model will pave the way towards ARM Mac systems. Some power-user type consumers may complain at first, depending on the software compatibility story, then realize it works just fine, since the majority of computer users today do not do many tasks that can't be accomplished on an iPad or a lower-end computer. Apple needs to gain the public's trust for basic tasks first, before they will be able to break into the market of users performing more hardcore or "Pro" tasks. This early model will probably not be targeted at those high-end professionals, which will allow Apple to begin gathering early information about the stability and performance of this model, day-to-day usability, developmental issues that need to be addressed, hardware failure analysis, etc. All of this information is crucial to Stage 4, or possibly later parts of Stage 3.
The 2 biggest concerns most people have with the architecture change are app support and Boot Camp.
Any apps released through the Mac App Store will not be a problem. Because App Store apps are submitted as LLVM IR ("Bitcode"), the system can automatically download versions compiled and optimized for ARM platforms, similar to how App Thinning on iOS works. For apps distributed outside the App Store, things might be trickier. There are a few ways this could go:
As for Boot Camp, while ARM-compatible versions of Windows do exist and are in development, they come with their own similar set of app support problems. Microsoft has experimented with emulating x86_64 on their ARM-based Surface products, and some other OEMs have created their own Windows-powered ARM laptops, but with very little success. Performance is a problem across the board, with other ARM silicon not being anywhere near as advanced, and with the majority of apps in the Windows ecosystem that were not developed in-house at Microsoft running terribly due to the x86_64 emulation software. If Boot Camp does come to the early ARM MacBook, it will more than likely run very poorly for anything other than Windows UWP apps. There is a high chance it will be abandoned entirely until Windows becomes much more friendly to the architecture.
I believe this will also be a very crucial turning point for the MacBook lineup as a whole. At present, the iPad Pro paired with the Magic Keyboard is, in many ways, nearly identical to a laptop, with the biggest difference being the system software itself. While Apple executives have outright denied plans of merging the iPad and MacBook lines, that could very well just be a marketing stance, shutting down the rumors in anticipation of a well-executed surprise. I think that Apple might at least re-examine the possibility of merging Macs and iPads in some capacity, but whether they proceed or not could be driven by consumer reaction to both products. Do they prefer the feel and usability of macOS on ARM, and like the separation of the two products? Is there success across the industry for the ARM platform, both at the lower and higher ends of the market? Do users see that iPadOS and macOS are just 2 halves of the same coin? Should there be a middle ground, a new type of product similar to the Surface Book but running macOS? Should Macs and iPads run a completely uniform OS? Will iPadOS ever expose the same sort of UNIX-based tools for IT administrators and software developers that macOS has? These are all very real questions that will pop up in the near future.
The line between Stage 3 and Stage 4 will be blurry, and will depend on how Apple wishes to address different problems going forward, and what the reactions look like. It is very possible that only the 12" will be released at first, or that a handful more lower-end laptop and desktop models could be released, with high-performance Macs following in Stage 4, or perhaps everything but enterprise products like the Mac Pro will be switched fully. Only time will tell.

Stage 4 (the end goal):

Congratulations, you've made it to the end of my TED talk. We are now well into the 2020s and COVID-19 Part 4 is casually catching up to the 5G = Virus crowd. All Macs have transitioned fully to ARM. iMac, MacBooks Pro and otherwise, Mac Pro, Mac Mini, everything. The future is fully Apple from top to bottom, and the vertical integration leading to market dominance continues. Many other OEMs have begun to follow down this path to some extent, creating more demand for a similar class of silicon from other firms.
The remainder here is pure speculation with a dash of wishful thinking. There are still a lot of things that are entirely unclear. The only concrete thing is that Stage 4 will happen when everything is running Apple's in-house processors.
By this point, consumers will be quite familiar with ARM Macs existing, and developers will have had enough time to transition apps fully over to the newly unified system. Any performance, battery life, or app support concerns will not be an issue at this point.
There are no more details here, it’s the end of the road, but we are left with a number of questions.
It is unclear if Apple will stick to AMD's GPUs or whether they will instead opt to use their in-house graphics solutions that have been used since the A11 series of processors.
How Thunderbolt support on these models of Mac will be achieved is unknown. While Intel has made it openly available for use, and there are plans to have USB and Thunderbolt combined in a single standard, it’s still unclear how it will play along with Apple processors. Presently, iPhones do support connecting devices via PCI Express to the processor, but it has only been used for iPhone and iPad storage. The current Apple processors simply lack the number of lanes required for even the lowest end MacBook Pro. This is an issue that would need to be addressed in order to ship a full desktop-grade platform.
There is also the question of upgradability for desktop models, and if and how there will be a replaceable, socketed version of these processors. Will standard desktop and laptop memory modules play nicely with these ARM processors? Will they drop standard memory across the board, in favor of soldered options, or continue to support user-configurable memory on some models? Will my 2023 Mac Pro play nicely with a standard PCI Express device that I buy off the shelf? Will we see a return of “Mac Edition” PCI devices?
There are still a lot of unknowns, and guessing any further in advance is too difficult. The only thing that is certain, however, is that Apple processors coming to Mac is very much within arm’s reach.
submitted by Fudge_0001 to apple

Invisible Object Culling In Quake Related Engines (REVISED)

Prologue
Despite all the great achievements in video card development and the sworn assurances of developers about drawing 2 to 3 million polygons on screen without a significant FPS drop, in reality it's not all that rosy. It depends on the rendering methods, on the number of textures involved, and on the complexity and number of shaders involved. So even if all this really does ultimately lead to high performance, it only happens in the demos that the developers themselves kindly offer. In those demos, some "spherical dragons in a vacuum" made of a good hundred thousand polygons are indeed drawn very quickly. However, the real in-game situation for some reason never looks like this funny dragon from a demo, and as a result many comrades abandon the development of their "Crysis killer" as soon as they manage to render a single room with a couple of light sources, because for some reason the FPS in this room fluctuates around 40-60 even on their 8800GTS, and upon creating a second room it drops to a whopping 20. Of course, with problems like this, it would be incorrect to say that things aren't that bad, that the trouble of such developers lies purely in the absence of correctly implemented culling, and that it is time for them to read this article. But for those who have already overcome "the first room syndrome" and tried to draw a world - an inferior one, perhaps, but a world anyway - this problem really is relevant.
However, it should be borne in mind that QUAKE, written in ancient times, was designed exclusively for levels of the "corridor" kind; therefore the clipping methods discussed in this article are not applicable to landscapes such as the ones in STALKER or Crysis, since completely different methods work there, whose analysis is beyond the scope of this article. Meanwhile we'll talk about the classic corridor approach to mapping and the effective clipping of invisible surfaces, as well as the clipping of entire objects.

The paper tree of balloon leaves

As you probably know, QUAKE uses BSP, a Binary Space Partitioning tree. This is a space indexing algorithm, and BSP itself doesn't care if the space is open or closed; it doesn't even care if the map is sealed, it can be anything. BSP implies the division of a three-dimensional object by a certain number of secant planes called "the branches" or "the nodes" into volumetric areas or rooms called "the leaves". The names are confusing, as you can see. In QUAKE / QUAKE2 the branches usually contain information about the surfaces that the branch contains, and the leaves are empty space, not filled with anything. Although sometimes leaves may contain water, for example (in the form of a variable that indicates, specifically, that we've got water in this leaf). Also, the leaf contains a pointer to the potential visibility data (Potentially Visible Set, PVS) and a list of all surfaces that are marked as being visible from this leaf. Actually, the approach itself implies that we are able to draw our world however we prefer, either using leaves only or using branches only. This is especially noticeable in the different QUAKEs: for example, in QUAKE1 in a leaf we just mark our surfaces as visible and then we also sequentially go through all the surfaces visible from a particular branch, assembling chains of surfaces to draw them later. But in QUAKE3, we cannot accumulate visible surfaces until we get into the leaf itself.
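To make the later snippets in this article easier to follow, here is a simplified C sketch of what a branch (node) and a leaf might carry in a Quake-style engine. The field set is trimmed and the names only follow the spirit of Quake's mnode_t / mleaf_t / msurface_t, so treat it as an illustration rather than the engine's exact layout.

typedef float vec3_t[3];

typedef struct mplane_s
{
    vec3_t normal;
    float  dist;
} mplane_t;

typedef struct msurface_s
{
    int                 visframe;       /* == r_framecount if the surface should be drawn this frame */
    int                 flags;          /* e.g. SURF_PLANEBACK */
    struct msurface_s  *texturechain;   /* next surface sharing the same texture */
} msurface_t;

/* Nodes ("branches") and leaves share a small common header,
   so a pointer can be walked up the tree without caring which one it is. */
typedef struct mnode_s
{
    int              contents;          /* 0 for nodes, negative for leaves */
    int              visframe;          /* == r_visframecount if reachable this frame */
    struct mnode_s  *parent;

    /* node-only data */
    mplane_t        *plane;             /* the secant plane of this branch */
    struct mnode_s  *children[2];       /* front and back sub-trees */
    unsigned short   firstsurface;      /* world surfaces lying on this node (QUAKE / QUAKE2) */
    unsigned short   numsurfaces;
} mnode_t;

typedef struct mleaf_s
{
    int              contents;          /* e.g. CONTENTS_EMPTY or CONTENTS_WATER */
    int              visframe;
    struct mnode_s  *parent;

    /* leaf-only data */
    unsigned char   *compressed_vis;    /* run-length-compressed PVS row for this leaf */
    msurface_t     **firstmarksurface;  /* surfaces marked visible from this leaf */
    int              nummarksurfaces;
} mleaf_t;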
In QUAKE and QUAKE2, all surfaces must lie on the node, which is why the BSP tree grows rather quickly, but in exchange this makes it possible to trace these surfaces by simply moving around the tree, without wasting time checking each surface separately, which positively affects the speed of the tracer. Because of this, a unique surface is linked to each node (the original surface is divided into several if necessary), so in the nodes we always have what is known to be visible beforehand, and therefore we can perform a recursive search on the tree using the BBox pyramid of the frustum as the direction of our movement along the BSP tree (the SV_RecursiveWorldNode function).
In QUAKE3, the tree was simplified and it tries to avoid geometry cuts as much as possible (a BSP tree is not even obliged to cut geometry; such cuts are merely a matter of the optimality of the tree). And surfaces in QUAKE3 do not lie on the node, because patches and triangle models lie there instead. What happens if they are put on the node anyway can be seen in the example of "The Edge Of Forever" map that I recently compiled for an experimental version of Xash. It turns out that in places which had a couple thousand visible nodes and leaves in the original, there are almost 170 thousand of them with the new tree. And this is the result after all the preliminary optimizations, otherwise it could have been even more, he-he. Yeah, so... For this reason, the tree in QUAKE3 does not put anything on the node, and we certainly do need to get into the leaf, mark visible surfaces in it and add them to the rendering list. In QUAKE / QUAKE2, on the contrary, going deep down to the leaf itself is not necessary.
Invisible polygon cutoff (we are talking about world polys, separate game objects will be discussed a bit later) is based on two methods:
The first method is to use bit-vectors of visibility (the so-called PVS - Potentially Visible Set). The second method is regular frustum culling, which actually has nothing to do with BSP but works just as efficiently, under certain conditions of course. Bottom line: together these two methods provide almost perfect clipping of invisible polygons, drawing a very small visible piece out of the vast world. Let's take a closer look at PVS and how it works.

When FIDO users get drunk

The underlying idea of PVS is to record the fact that one leaf is visible from another. With BSP alone this is basically impossible, because leaves from completely different branches can be visible at the same time, and you will never find a way to identify a pattern for leaves from different branches seeing each other - it simply doesn't exist. Therefore, the compiler has to do the hard work for us, checking the visibility of all leaves from all leaves ahead of time. The visibility information in this case is scanty: one Boolean variable with possible values 0 and 1. 0 means that a leaf is not visible and 1 means that it is visible. It is easy to guess that for each leaf there is a unique set of such Boolean variables the size of the total number of leaves on the map. So a set like this, but for all the leaves, will take far more space: the number of leaves multiplied by the number of leaves, multiplied by the size of the variable in which we store the visibility information (0/1).
And the number of leaves, as you can easily guess, is determined by the map size and by the compiler, which, upon reaching a certain map size, ceases to divide the world into leaves and treats the resulting node as a leaf. Leaf sizes vary between the different QUAKEs. In QUAKE1, for example, leaves are very small: I can tell you that the compiler divides a standard boxmap in QUAKE1 into as many as four leaves, meanwhile in QUAKE3 a similar boxmap takes only one leaf. But we digress.
Let's estimate the size of our future PVS file. Suppose we have an average map and it has a couple thousand leaves. If we imagine that the information about leaf visibility is stored in a variable of char type (1 byte), then the size of the visdata for this level would be, no more no less, almost 4 megabytes. That is, much AF. Of course, an average modern developer would shrug and pack the final result into a zip archive, but back in 1995 end users had modest machines, their memory was low and therefore visdata was packed in "more different" ways. The first step in optimizing is to store the data not in bytes, but in bits. It is easy to guess that such an approach shrinks the result by as much as 8 times and, what's typical AF, does it without any resource-intensive algorithms like Huffman trees. Although in exchange, this approach somewhat worsened code usability and readability. Why am I writing this? Because of many developers' lack of understanding of conditions in code like this:
if (pvs[leafnum >> 3] & (1 << (leafnum & 7))) { }
Actually, this condition implements simple, beautiful and elegant access to the desired bit in the array (as one may recall, you cannot address anything smaller than a byte, so individual bits can only be worked with via bit operations).
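Here is a minimal sketch in C of how that bit test and the accompanying visdata decompression fit together. The helper names and the buffer handling are mine; the run-length scheme (a literal byte is copied as-is, a zero byte is followed by a count of zero bytes) mirrors what Quake's Mod_DecompressVis does, but treat the code as an illustration rather than the engine's exact source.

/* Test whether leaf 'leafnum' is marked visible in a decompressed PVS row. */
static int Leaf_IsVisible(const unsigned char *pvs, int leafnum)
{
    return pvs[leafnum >> 3] & (1 << (leafnum & 7));
}

/* Expand run-length-compressed visdata into a flat bit array.
   Non-zero bytes are copied verbatim; a zero byte is followed by the
   number of zero bytes to emit. 'numleafs' determines the row size. */
static void DecompressVis(const unsigned char *in, unsigned char *out, int numleafs)
{
    int row = (numleafs + 7) >> 3;          /* bytes per leaf row */
    unsigned char *dst = out;

    while (dst < out + row)
    {
        if (*in)                            /* literal byte */
        {
            *dst++ = *in++;
            continue;
        }
        int count = in[1];                  /* zero byte + run length */
        in += 2;
        while (count-- > 0 && dst < out + row)
            *dst++ = 0;
    }
}

With that, the condition above simply becomes if (Leaf_IsVisible(pvs, leafnum)) { ... }.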

Titans that keep the globe spinning

The visible part of the world is cut off in the same fashion: we find the current leaf where the player is located (in QUAKE this is implemented by the Mod_PointInLeaf function), then we get a pointer to the visdata for the current leaf (for our convenience, it is linked directly to the leaf in the form of the "compressed_vis" pointer), and then we simply go through all the leaves and branches of the map and check them for being visible from our leaf (this can be seen in the R_MarkLeaves function). Whenever a leaf turns out to be visible from the current leaf, we assign it a number from the "r_visframecount" sequence, which increases by one every frame. Thus, we note that this leaf is visible while we build the current frame. In the next frame, "r_visframecount" is incremented by one and all the leaves are considered invisible again. As one can understand, this is much more convenient and much faster than revisiting all the leaves at the end of each frame and zeroing their "visible" variable. I drew attention to this detail because this mechanism also bothers some people who don't understand how it works.
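A condensed sketch of that marking pass, modeled on the shape of Quake's R_MarkLeaves. It reuses the node/leaf structures sketched earlier; the direct indexing of the PVS row by leaf index is a simplification (the real engines offset the leaf numbering slightly), so this illustrates the counter trick rather than being engine-accurate code.

int r_visframecount;    /* bumped once per frame before marking */

void MarkLeaves_Sketch(mleaf_t *leafs, int numleafs, const unsigned char *pvs)
{
    r_visframecount++;

    for (int i = 0; i < numleafs; i++)
    {
        if (!(pvs[i >> 3] & (1 << (i & 7))))
            continue;                        /* not visible from the view leaf */

        /* Mark the leaf and walk up its parents, so whole branches are
           tagged as containing something visible this frame. */
        mnode_t *node = (mnode_t *)&leafs[i];
        while (node && node->visframe != r_visframecount)
        {
            node->visframe = r_visframecount;
            node = node->parent;
        }
    }
}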
The R_RecursiveWorldNode function walks along the leaves and branches marked this way. It cuts off obviously invisible leaves and accumulates a list of surfaces from the visible ones. Of course, the first check is for the equality of r_visframecount and visframe for the node in question. Then the branch undergoes the frustum pyramid check, and if this check fails we don't climb further along this branch. Having stumbled upon a leaf, we mark all its surfaces visible in the same way, assigning the current r_framecount value to their visframe variable (in the future this will help us quickly determine whether a certain surface is visible in the current frame). Then, using a simple function, we determine which side of the plane of our branch we are on (each branch has its own plane, literally called "plane" in the code) and, again, for now, we just take all the surfaces linked to this branch and add them to the drawing chain (the so-called "texturechain"), although nobody can actually stop us from drawing them immediately, right there (in the QUAKE1 source code one can see both options), having previously checked these surfaces for clipping against the frustum pyramid, or at least having made sure that the surface faces us.
In QUAKE, each surface has a special flag SURF_PLANEBACK which helps us determine the orientation of the surface. In QUAKE3 there is no such flag anymore, and the clipping of invisible surfaces is not as efficient, sending twice as many surfaces for rendering. However, their total number after performing all the checks is not that great. Still, whatever one may say, adding this check to Xash3D raised the average FPS almost one and a half times in comparison to the original Half-Life. So much for whether it is beneficial. But we digress.
So after chaining and drawing the visible surfaces, we call R_RecursiveWorldNode again, but now for the second of the two root branches of the BSP tree. Just in case. Because the visible surfaces may well be there, too. When the recursion ends, the result will be either a fully rendered world, or at least chains of visible surfaces. This is what can actually be sent for rendering with OpenGL or Direct3D - well, unless we drew our world right in the R_RecursiveWorldNode function, of course. This method, with minor upgrades, is successfully used in all three QUAKEs.
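For those who prefer code to prose, here is a heavily condensed sketch of that recursion, following the overall shape of Quake's R_RecursiveWorldNode. It builds on the structures sketched earlier; the frustum helper is only a declared stub, and the constants and globals introduced here are illustrative assumptions rather than the engine's exact names.

#define CONTENTS_SOLID  -2                   /* the value QUAKE uses for solid leaves */

extern float r_origin[3];                    /* camera position for this frame */
extern int r_visframecount, r_framecount;

int CullBox_Sketch(const mnode_t *node);     /* assumed frustum test for the node's bounds */

void RecursiveWorldNode_Sketch(mnode_t *node)
{
    if (node->contents == CONTENTS_SOLID)
        return;                              /* solid leaf: nothing to draw */
    if (node->visframe != r_visframecount)
        return;                              /* not marked reachable by the PVS pass */
    if (CullBox_Sketch(node))
        return;                              /* outside the frustum pyramid */

    if (node->contents < 0)                  /* it's a leaf: mark its surfaces */
    {
        mleaf_t *leaf = (mleaf_t *)node;
        for (int i = 0; i < leaf->nummarksurfaces; i++)
            leaf->firstmarksurface[i]->visframe = r_framecount;
        return;
    }

    /* it's a branch: which side of its plane are we on? */
    float d = r_origin[0] * node->plane->normal[0]
            + r_origin[1] * node->plane->normal[1]
            + r_origin[2] * node->plane->normal[2] - node->plane->dist;
    int side = (d < 0);

    RecursiveWorldNode_Sketch(node->children[side]);   /* near side first */

    /* ...here the surfaces lying on this node would be chained into their
       texturechain, skipping back-facing ones (the SURF_PLANEBACK check)... */

    RecursiveWorldNode_Sketch(node->children[!side]);  /* then the far side */
}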

A naked man is in a wardrobe because he's waiting for a tram

One of the upgrades is the use of so-called areaportals. This is another optimization method, coming straight out of QUAKE2. The point of areaportals is that the game logic can turn the visibility of entire sectors on and off at its discretion. Technically, this is achieved as follows: the world is divided into zones similar to the usual partitioning along the BSP tree; however, there can't be more than 256 of them (later I will explain why) and they are not connected in any way.
Regular visibility is determined just like in QUAKE; however, by placing a special "func_areaportal" entity we can force the compiler to split an area in two. This mechanism operates on approximately the same principle as the algorithm that searches for holes in the map, so you won't deceive the compiler by putting a func_areaportal in an open field - the compiler will simply ignore it. Although if you make the areaportal the size of the cross-section of that field (reaching the skybox in all directions), the zones will be divided in spite of everything. We can observe this technique in Half-Life 2, where an attempt to return to old places (with cheats, for example) shows us disconnected areaportals and a brief transition through the void from one zone to another. Actually, this mechanism is what helped Half-Life 2 simulate large spaces successfully while still using the BSP level structure (I have already said that BSP, or rather its visibility check algorithm, is not very suitable for open spaces).
So an installed areaportal forcibly breaks one zone into two, and the rest of the zoning is at the discretion of the compiler, which at the same time makes sure not to exceed the 256-zone limit, so zone sizes can be completely different. Well, I repeat, it depends on the overall size of the map. Our areaportal is connected to some door dividing these two zones. When the door is closed, it turns the areaportal off and the zones are separated from each other. Therefore, if the player is not in the cut-off zone, then rendering it is not worth it. In QUAKE, we'd have to do a bunch of checks, and it's possible that we could only cut off a fraction of the polygons (after all, the door itself is not an obstacle for the visibility check, much less for the frustum). Compare that to the case in point: one command is issued - and the whole room is excluded from visibility. "Not bad," you'd say, "but how would the renderer find out? After all, we performed all our operations on the server and the client does not know anything about it." And here we come back to the question of why there can't be more than 256 zones.
The point is, the information about the visibility of all the zones is likewise packed into bit flags (like PVS) and transmitted to the client in a network message. Dividing 256 bits by 8 makes 32 bytes, which generally isn't that much. In addition, the tail of this information can easily be cut off if it contains only zeroes. The payback for that optimization is an extra byte that has to be transmitted over the network to indicate the actual size of the zone-visibility message. But, in general, this approach is justified.
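The client-side check is the same bit trick as the PVS test. A small sketch with illustrative names (in QUAKE2 the equivalent data arrives as the area bits of the current frame):

#define MAX_MAP_AREAS   256
#define MAX_AREA_BYTES  (MAX_MAP_AREAS / 8)   /* 256 bits / 8 = 32 bytes */

/* Is the given area currently connected/visible, according to the
   bit flags the server sent with this frame? */
static int Area_IsVisible(const unsigned char areabits[MAX_AREA_BYTES], int area)
{
    return areabits[area >> 3] & (1 << (area & 7));
}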

Light_environment traces enter from the back

Source Engine turned out to have a terrible bug which makes the whole areaportal thing nearly meaningless. Numerous problems arise because of it: water breaks down into segments that pop in - well, you should be familiar with all this by now. The areaportal cuts the geometry unpredictably, like an ordinary secant plane, but its whole point is to be predictable! Areaportal brushes in Source Engine have absolutely no priority in splitting the map. It should be like this: first, the tree is cut the regular way, and when no suitable planes are left, the final secant plane of the areaportal is used. This is the only way to cut the sectors correctly.

Modern problems

The second optimization method, as I said, is an increased size of the final leaf, akin to QUAKE3. It is believed that a video card will draw a certain number of polygons faster than the CPU can check whether they are visible. This comes from the very concept of the visibility check: if the visibility check takes longer than drawing outright, then, well, to hell with that check. The controversy of this approach stems from the wide range of video cards in the hands of end users, and it is strongly affected by the surging fashion for laptops and netbooks, in which the video card is a very conditional and very weak concept (don't even consider its claimed Shader Model 3 support). Therefore, for desktop gaming machines it is more efficient to draw more at a time, but for the weak video cards of laptops traditional culling will remain more reliable. Even if it is as simple a culling as the one I described earlier.

Decompression sickness simulator

Although I should also mention the principles of frustum culling; perhaps they are unclear to some. Cutoff by the frustum pyramid is pure mathematics, without any compiler calculations. From the current direction of the player's gaze, a clipping pyramid is built (the tip of the pyramid - in case it isn't obvious - sits at the player's point of view and its base extends in the direction the player is looking). The angle between the walls of the pyramid can be acute or obtuse - as you probably guessed already, it depends on the player's FOV. In addition, the player can forcefully pull the far wall of the pyramid closer to himself (yes, this is the notorious "MaxRange" parameter in the "worldspawn" settings of the map editor). Of course, OpenGL also builds a similar pyramid for its internal needs when it takes information from the projection matrix, but we're talking about the local pyramid now. The finished pyramid consists of 4-6 planes (QUAKE uses only 4 planes and trusts OpenGL to independently cut far and near polygons, but if you write your own renderer and intend to support mirrors and portals you will definitely need all six planes). The frustum test itself is an elementary check for the presence of an AA-box (AABB, Axis-Aligned Bounding Box) inside the frustum pyramid - or, more correctly, a check for their intersection. Let me remind you that each branch has its own dimensions (a fragment of the secant plane bounded by the neighboring perpendicular secant planes) which are checked for intersection. But unfortunately the frustum test has one fundamental drawback - it cannot cut anything that is directly in the player's view. We can adjust the cutoff distance, we can even pull that "ear feint" like they do in QFusion, where the final zFar value is calculated each frame before rendering and then taken into account in entity clipping, but after all, whatever they say, that value itself is obtained from the PVS information. Therefore, neither of the two methods can replace the other; they complement each other. This should be remembered.
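A generic AABB-vs-frustum test in C, for reference. This is not Quake's exact R_CullBox (which goes through BoxOnPlaneSide with precomputed plane signbits); it is the common "farthest corner" form, and it assumes the plane normals point inward, into the frustum.

typedef struct
{
    float normal[3];
    float dist;
} frustum_plane_t;

/* Returns 1 if the axis-aligned box lies completely outside at least one
   frustum plane (and can therefore be skipped); boxes that merely
   intersect a plane are conservatively kept. */
int CullBox(const float mins[3], const float maxs[3],
            const frustum_plane_t *frustum, int numplanes)
{
    for (int p = 0; p < numplanes; p++)
    {
        const frustum_plane_t *pl = &frustum[p];
        float corner[3];

        /* pick the box corner that sits farthest along the plane normal */
        for (int i = 0; i < 3; i++)
            corner[i] = (pl->normal[i] >= 0.0f) ? maxs[i] : mins[i];

        /* if even that corner is behind the plane, the whole box is outside */
        if (pl->normal[0] * corner[0] +
            pl->normal[1] * corner[1] +
            pl->normal[2] * corner[2] - pl->dist < 0.0f)
            return 1;
    }
    return 0;   /* inside or intersecting every plane */
}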

I gotta lay off the pills I'm taking

It seems we have figured out the rendering of the world, so now we move on smoothly to cutting off moving objects... which are all the visible objects in the world! Even the ones that, at first glance, stand still and aren't planning to move anywhere. Because the player moves! From one point he still sees a certain static object, and from another point, of course, he no longer does. This detail should also be considered.
Actually, at the beginning of this article I already described the algorithm for checking object visibility: first we find the visible leaf for the player, then we find the visible leaf for the entity, and then we check against the visdata whether they see each other. I would also like to clarify (in case someone doesn't understand) that each moving entity is assigned the number of its current leaf, i.e. the leaf for its own current position; the leaves themselves are, of course, static and always in the same place.
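In code, the core of that A-sees-B test is once again the familiar bit lookup. A tiny sketch with illustrative names (the real engines obtain the leaf via a Mod_PointInLeaf-style lookup, and their exact leaf-numbering conventions differ slightly):

/* pvs_of_A: the decompressed visibility row for the leaf entity A is in.
   leafnum_of_B: the leaf entity B's origin falls into. */
static int Entity_CanSee(const unsigned char *pvs_of_A, int leafnum_of_B)
{
    return (pvs_of_A[leafnum_of_B >> 3] & (1 << (leafnum_of_B & 7))) != 0;
}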

Ostrich is such an OP problem solver

So the method described above has two potential problems:
The first problem is that even if A sees B then, oddly enough, B is far from always seeing A. In other words, entity A can see entity B, but this does not mean that entity B sees entity A - and, no, it's not about one of them "looking" away. So why does this happen? Most often for two reasons:
The first reason is that one of the entities' ORIGIN sits tight inside a wall, and the Mod_PointInLeaf function for it points to the outer "zero" leaf, from which EVERYTHING is visible (haven't any of you ever flown around the map?). Meanwhile, no leaf inside the map can see the outer leaf - these two features actually explain an interesting fact: the entire world geometry becomes visible when you fly outside the map, and, on the contrary, all objects disappear. In regular play, similar problems can occur for objects attached to a wall or recessed into a wall. For example, sometimes the sound of a pressed button or an opening door disappears because its current position went beyond the world borders. This phenomenon is fought by swapping objects A and B or by obtaining alternative points for the position of an object, but all the same, it's not very reliable.

But lawyer said that you don't exist

In addition, as I said, there is another problem. It comes from the fact that not every entity fits into a single leaf. Only the player is so small that he can always be found in one leaf only (well, in the most extreme case, in two leaves on the border of water and air; this situation is fought with various hacks, btw), but some giant hentacle - or, on the contrary, an elevator made as a door entity - can easily occupy 30-40 leaves at a time. An attempt to check only one leaf (for example, the one where the center of the model is) will inevitably lead to a deplorable result: as soon as the center of the object moves out of the player's visibility, the entire object will disappear completely. The most common case is the notorious func_door used as an elevator. There is one in QUAKE on E1M1. Observe: it travels halfway, and then its ORIGIN is outside the map, and therefore it should disappear from the player's field of view. However, it does not go anywhere, right? Let us see in greater detail how this is done.
The simplest idea that comes to mind: since the object occupies several leaves, we have to save them all somewhere in the object's structure in the code and check them one by one. If at least one of these leaves is visible, then the whole object is visible (its very tip, for example). This is exactly what was implemented in QUAKE: a static array of 16 leaves and a simple recursive function, SV_FindTouchedLeafs, that looks for all the leaves within the bounds given by the "pev->absmin" and "pev->absmax" variables (pev, i.e. a pointer to the entvars_t table). absmin and absmax are recalculated each time SV_LinkEdict (or its more specific case, UTIL_SetOrigin) is called. Hence the quite logical conclusion that a simple change of ORIGIN without recalculating the visible leaves will sooner or later take the object out of visibility, even if, surprisingly enough, it's right in front of the player and the player should technically still be able to see it. Inb4 "why does one have to call UTIL_SetOrigin, wouldn't it be easier to just assign a new value to the "pev->origin" vector without calling this function?" It wouldn't.
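A sketch of that leaf-gathering pass in C, in the spirit of SV_FindTouchedLeafs. It reuses the node/leaf structures and CONTENTS_SOLID from the earlier sketches; the entity structure, the leaf-index math and the box-vs-plane helper are simplified stand-ins, not the engine's exact code.

#define MAX_ENT_LEAFS 16   /* QUAKE's original limit; Half-Life raised it to 48 */

typedef struct
{
    float absmin[3], absmax[3];   /* world-space bounds, like pev->absmin / pev->absmax */
    int   num_leafs;
    short leafnums[MAX_ENT_LEAFS];
} edict_sketch_t;

/* Which side(s) of the plane does the box reach? 1 = front, 2 = back, 3 = both.
   (Quake uses BoxOnPlaneSide here; this is a simplified equivalent.) */
static int BoxOnPlaneSide_Sketch(const float *mins, const float *maxs, const mplane_t *p)
{
    float near_d = 0.0f, far_d = 0.0f;
    for (int i = 0; i < 3; i++)
    {
        float a = p->normal[i] * mins[i];
        float b = p->normal[i] * maxs[i];
        near_d += (a < b) ? a : b;
        far_d  += (a < b) ? b : a;
    }
    int sides = 0;
    if (far_d  >= p->dist) sides |= 1;
    if (near_d <  p->dist) sides |= 2;
    return sides;
}

/* Recursively collect every leaf the entity's box touches. When the array
   is full, extra leaves are silently dropped - exactly the failure mode
   discussed below. */
void FindTouchedLeafs_Sketch(edict_sketch_t *ent, mnode_t *node, mleaf_t *leafs)
{
    if (node->contents == CONTENTS_SOLID)
        return;

    if (node->contents < 0)                        /* reached a non-solid leaf */
    {
        if (ent->num_leafs < MAX_ENT_LEAFS)
            ent->leafnums[ent->num_leafs++] = (short)((mleaf_t *)node - leafs);
        return;
    }

    int sides = BoxOnPlaneSide_Sketch(ent->absmin, ent->absmax, node->plane);
    if (sides & 1) FindTouchedLeafs_Sketch(ent, node->children[0], leafs);
    if (sides & 2) FindTouchedLeafs_Sketch(ent, node->children[1], leafs);
}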
This method solves both of the earlier problems nicely: it helps against the loss of visibility when the object's ORIGIN goes beyond the world borders, and it evens out the asymmetry between A seeing B and B seeing A.

A secret life of monster_tripmine

Actually, we've yet to encounter another problem, but it does not show up immediately. Remember, we've got an array of 16 leaves. But what if that isn't enough? Thank God there are no beams in QUAKE and no very long elevators made as func_door either - for this exact reason. Because when the array is filled to capacity, the SV_FindTouchedLeafs function simply stops, and we can only hope that there won't be too many cases where an object disappears right before our eyes. But in the original QUAKE, such cases may well occur. In Half-Life, the situation is even worse - as you may remember, there are beams that can stretch across half the map, tripmine beams for example. In this case, a situation may occur where we see just the very tip of the beam. For most of these beams, 16 leaves are clearly not enough. Valve tried to remedy the situation by increasing the array to 48 leaves. That helped. On early maps. If you remember, at the very beginning of the game, once the player has got off the tram, he enters that epic elevator that takes him down. The elevator is made as a door entity and it occupies exactly 48 leaves. Apparently, the final size of the array was chosen to fit it. Then the programmers realized that this isn't really a solution, because no matter how much one expands the array, it can still fall short for something. So they bolted on an alternative method for the visibility check: a head branch (headnode) check. In short, this is still the same SV_FindTouchedLeafs, but now it is called directly from the place of the visibility check, with the visdata passed into it. In general, it is not used very often, because it is slower than checking pre-accumulated leaves; it is intended just for non-standard cases like this one.
Well, since, I hope, a general picture of the clipping mechanism is already beginning to take shape in your mind, I will finish the article in just a few more words.
On the server, all objects that have passed the visibility check are added to the network message containing information about visible objects. Thus, on the client, the list of visible entities has already been cut down by PVS, we do not have to do this again, and a simple frustum check is therefore enough. You ask, "why did we have to cut off invisible objects on the server when we could do this later, once we are on the client already?" I reply: yes, we could, but the objects cut off on the server never got into the network message and saved us some traffic. And since the player still does not see them, what is the point of transferring them to the client just to check them for visibility afterwards? This is a kind of double optimization :)
© Uncle Mike 2012
submitted by crystallize1 to hammer

[LONG] My Story of Disillusionment with and Disappointment in the World and Myself

Intro.
This might be a long one. I hope someone reads the thing, I put like 3 hours into writing it. A brief story of my life and how it all led up to this moment, where I am disillusioned with my self-image, my life choices, and certain aspects of the world, and have no idea what to do next. Warning: this whole thing might be a little depressing to read.
Childhood.
I am a 20yo Russian male. During my childhood, I was made to believe that I am capable of doing something great and doing better than anyone. At the same time I developed a very non-conformist life stance and very often rejected things and ideas simply because they were too popular for my taste, and I couldn't feel special whilst enjoying them. Of course, in turn, society rejected me, as it does with anyone who doesn't play by the rules. Oh well.
My only redeeming quality was that I considered myself pretty smart. Which is even easier to assume, when at the same time you think that you're different from everyone else. Now, I know that to some extent, I was indeed smarter than most people in certain areas. Unlike most people I knew back then, often with bare minimum efforts I was able to maintain near perfect grades at school. I was also enjoying learning new things and reading more than an average person. So, let's just say, I had a basis to assume I was a smart dude.
I wasn't happy and content with my life, though. I never had real friends, because I only hung out with people when they were my classmates/roommates/co-workers, and after we parted ways, I rarely if ever contacted them afterwards. I always enjoyed doing things you usually do in solitude more, because when I was alone, I wouldn't be afraid that someone could hurt me for being different. Because of that, I was never in a romantic relationship.
High School.
Still, life was going okay. By the end of school, I kind of accepted my social deficiency and I wanted to focus on improving the world and become a successful person - for myself. I was facing a dilemma, though. Despite the fact that I was doing great in school, the idea of having to invest four years of my time into studying something really specific, and then having to work another 20-30 years on the same job was terrifying, because I had no idea what I liked to do! Nothing seemed interesting to me, I didn't have a passion for doing anything... Thanks to my video game addiction, which made me lazy as fuck, probably. I also needed to meet my criteria for success with my future job, which included being financially successful. I grew up in top 1% income family, so... I always felt the pressure to outperform or at least match my parents' income.
Enter trading. My dad discovered investing several years ago (we don't live in US, so most of the people aren't as financially savvy, so he never thought about investing before then). I was always curious about financial independence and markets, but now I was seeing it all done in front of me, I realized that it might be a good opportunity to make a lot of money and become successful without being socially adept, which is something absolutely required in business or politics. So, I asked my father to open a brokerage account for me in the US, and started swing trading (trading in weekly/monthly time frames). I could only trade slow and small because of the trade restrictions put on accounts <$25k and <21yo in the US. Still, it was going well, but in hindsight I was just lucky to be there during a great bull market.
Even before that, I thought trading and, more importantly, investing were the ways smart people make money. I thought that simply because I was conventionally smart, I had a talent or an innate ability to pick innovative stocks and do venture investing once I grew some capital. I truly believed in that long before I was introduced to financial markets; I believed that my surface-level understanding of multiple areas of cutting-edge and emerging technology would give me an edge over all the other investors.
US Community College and Return Back.
In the end, I decided I wanted to go to a US community college and study finance and become a trader and later an investor, but I didn't want to work for a fund or something like that (lazy ass). I wanted to use my knowledge and skill and my own money to grow my net worth and make a living. I didn't really like the process of trading, I just needed the money to live on while I was trying to figure out what else to do with my life. Because I thought I was smart, I thought this would come easily to me. Boy was I wrong. From the nicest of conditions in my hometown, I was suddenly moved into a foreign setting, on the other side of the planet, away from my family and mates, with a video game addiction and laziness that ruined my daily routine and my studying as well. The fact that I didn't like my major was not helping. My grades fell from A- in the first quarter to C+ in the last. I gained +30% of my normal weight. I was stressed out, not going outside and sitting at my computer desk for days at a time, skipping all the classes I could if they were not absolutely essential for my grades, living on prepared foods. I never got out of my shell and barely talked to anyone in English; all of my friends were Russian speaking. I wasted an opportunity to improve my speaking, although aside from that my English skills satisfy me.
By the end of community college, last summer, I was left with B grades that wouldn't let me transfer anywhere decent, and the extreme stress that I had put myself through started taking a toll on my mental health. I was planning to take a break, go back to Russia for several months, and transfer back to a US uni this winter. Needless to say, you can't run from yourself. It didn't really get much better after a few months in Russia. I didn't want to study finance anymore, because it was boring and I was exhausted. I still had the video game addiction, was still lazy and gained some more extra pounds. I was not sleeping at all, extremely sleep deprived for months. Because of this and the lack of mental stimulation I started to become dumber. And all of that was happening when I didn't really have to do anything: not study or work, just sit around the house and do whatever I wanted. Turns out, these conditions didn't help me fend off the incoming depression.
Finally, around November, when I had already sent out all of my transfer applications and already gotten some positive answers from several universities, I knew I didn't have much time left at home and that I had to leave soon. But I really, really didn't want to go back. It was scarier than the first time. I was afraid of new changes; I just wanted time to stop and let me relax, heal... I was having suicidal thoughts and talked about it with my family and my therapist. They were all supportive and helped me as much as they could. But I was the only person who could really help myself. If I wanted to breathe freely, I had to admit defeat and not go back to the US to continue my education. It was extremely hard at first, but then I just let go. I decided to find a temporary job as an English tutor and give myself time to think. Then I remembered that I had a bunch of money in my trading account. I still thought that I was pretty smart, despite failing college, so I figured, why not move it to Russian brokers who don't have trading restrictions and do it full time? Which is exactly what I did. And I started to study trading all by myself at a fast pace. I was now trading full time and it was going sideways: +10% in December, -20% in January. Then, something incredible happened. I was already in a shitty place in life, but I still had some hope for my future. Things were about to get much worse. In late January, I discovered for myself that the whole financial industry of the world was a fraud.
Brief Explanation of My Discoveries.
In the image of the financial industry, there are several levels of perceived credibility.
In the bottom tier, there is pure gambling. In my country, there were periods when binary options trading and unreliable Forex brokers were popular among common folk, but these were obvious and unsophisticated fraudsters who were one step away from being prosecuted. There are also cryptocurrencies that don't hold any value and are also used only for speculation/redistribution of wealth. There is also a wonderful gambling subreddit wallstreetbets where most users don't even try to hide the fact that what they are doing is pure gambling. I love it. But the thing is, this is trading/investing for the people who have no idea what it is, and most people discredit it as a fraud, which it, indeed, is. These examples are 99% marketing/public image and 1% finance. But these offer x10-1000 returns in the shortest time span. Typical get-rich-quick schemes, but they attract attention.
Then there is the trading tier. You can have multiple sub-levels here; at the bottom of this tier we would probably have complex technical analysis (indicators) and daily trading/scalping. I was doing this in December-January. At the top would be people who do fundamental analysis (study financial reports) and position trade (monthly time frames). Now, there is constant debate in the trading community whether technical analysis or fundamental analysis is better. I have a solid answer to the question. They work in the same way. Or rather, they don't work at all.
You'd ask: "Why didn't you discover this earlier? You've been in this financial thing for several years now!" Well, you see, unlike on the previous level, here millions of people say that they actually believe trading works and that there is a way to use the available tools to get great returns. Some of these people actually know that trading doesn't work, but they benefit from other traders believing in it, because they can sell them courses or take brokerage fees from them. Still, when there are millions around you telling you that it works, even a non-conformist like me would budge. Not that many people actually participate in the markets, so I thought that being in this minority made me smart and protected from fraudsters. Lol. All it took for me to discover the truth was to accidentally notice that some technical indicators give random results, do a few Google searches, and reach some scientific studies which are freely available and prove that technical and fundamental analysis don't work. It was always in front of me, but the fucking trading community plugged my ears and shut my eyes so I wasn't able to see it. Trading usually promises a 3-15% gain a month.
A huge shock, but surely there was still a way for me to work this out? Active investing it is!
The next level, active investing, is different from trading. You aim for 15-50% yearly returns, but you don't have to do as much work. You hold on to stocks of your choice for years at a time, once in a while you study the markets, rebalance your portfolio, etc. Or you invest your money in a fund that will select the stocks of their choice and manage their and your portfolio for you. For a small fee, of course. All of these actions are aimed at trying to outperform the gains the market makes as a whole, and the so-called index funds, which invest in basically everything and follow the market's returns - about 7-10% a year. And while I may have had doubts about trading, I had firmly believed that active investing worked since I was a little kid (yes, I knew about it back then). And this is where the real fraud comes in.
The whole of Wall Street and every broker, every stock exchange in the world are part of a big fraud. Only about 10-20% of professional fund managers outperform the market in any 15-year period. If you take 30 years, this dwindles to almost nothing, which means that no one can predict the markets. These people have no idea what they are doing. Jim Cramer is pure show business and has no idea what's going on. Warren Buffett gained his fortune with pure luck, and for every Buffett there are some people who made only a million bucks and countless folks who lost everything.
Wall Street. They have trillions of dollars and use all that money and power and marketing to convince you that there is a way to predict where the stocks are going without being a legal insider or somehow abusing the law. They will make you think you can somehow learn from them where to invest your money on your own or they will make you believe that you should just give it to them and they will manage it for you, because they know how everything works and they can predict the future using past data.
They won't. They don't. They can't. There are studies and statistics proving it countless times over the span of a hundred years. But they will still charge you exchange fees, brokerage fees and management fees anyway. And they also manipulate certain studies, lobby where and when they need it, and spread misinformation on an unprecedented scale, creating a positive image of themselves. And everyone falls for that. Billions of people around the globe still think it's all legit.
Passive index investing is the last level. You just put your money in the market and wait. Markets will go up at a predetermined rate. If there's a crisis, in 10 years no one will even remember. Markets always go up in the end. But passive index investing can only give you about 7% inflation-adjusted returns a year. Not enough to stop working or even retire early, unless you have a high-paying job in a first-world country. I don't.
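To put rough numbers on that, here's a quick back-of-the-envelope compounding check (a hypothetical Python sketch; the 7% figure is the inflation-adjusted return mentioned above, and the 1% fee is just an illustrative active-management drag, not anyone's actual fee schedule):

```python
# Rough compounding check: a fixed real return vs. the same return minus
# a hypothetical 1% annual management fee.
def grow(principal, annual_return, years):
    """Compound a starting balance at a fixed annual return."""
    balance = principal
    for _ in range(years):
        balance *= 1 + annual_return
    return balance

start, years = 10_000, 30                   # illustrative starting capital and horizon
passive = grow(start, 0.07, years)          # ~7% real return, no fees
with_fee = grow(start, 0.07 - 0.01, years)  # same return with a 1% fee drag

print(f"7% for {years} years:           ${passive:,.0f}")   # ~ $76,000
print(f"7% minus a 1% fee, {years} years: ${with_fee:,.0f}")  # ~ $57,000
```

The point isn't the exact figures, just that a seemingly small fee compounds into a big chunk of the final balance.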
Despite all that, to put it simply, this is the only type of investing that works and doesn't involve any kind of fraud or gambling. It's the type of investing that will give you the most money. If you want to know why that is and how to do it, just go to financialindependence. They know this stuff better than any other sub. Better than investing, trading or any other sub where non-passive-index investing is still discussed as a viable strategy.
Back to me.
My whole being was fucked over; my hopes, dreams and understanding of success and how this world works were shattered. I realized I had no future in the financial industry, because only the middlemen make money there, and I had quit the college education needed to get in. Frankly, I wouldn't want to work there even if I had the opportunity. The pay is good, but the job is boring and I wouldn't want to be part of this giant scheme anyway. But even if I wanted to go back, I couldn't. Russia is in a worsening crisis and my parents could no longer afford a US university, and now with the coronavirus it's even worse. Good thing I quit before it all happened. I learned a valuable lesson and didn't lose that much money for it (only about 10% of my savings). God knows where it would have led me if I had continued to be delusional. But now that my last temporary plans for the future were scrapped, I had no idea what to do next.
The future.
With reality hitting me, I would be lying if I said it didn't all come full circle and connect to my past. I realized that I was stupid, not intelligent, because I had been living in a made-up world for years. But even if I were intelligent, pure wit would not give me the success and fortune I was craving, because trading and active investing were a no-go for me, and business/politics require a very different, extroverted mindset, a different education and interests from my own. My only redeeming quality in a hopeless introvert's world, my perceived intelligence, was taken away from me and rendered useless at the same time.
Besides, failing at that one thing made me insecure about everything, and now I think of myself as an average individual. So, if 8 out of 10 businesses fail, I shouldn't start one because I will probably fail. And if most politicians don't get anywhere, why should I bother? If the average salary in my country is X, I shouldn't hope for more. I stopped believing in my ability to achieve something. First I failed at education and now I've failed... professionally? I don't know how to describe it, but my life recently has been just an emotional roller coaster. I just feel like a very old person and all I want is calmness and stability in my life. I was very lazy before just because, but now I also don't want to do anything because I feel I would just fail. It feels better now that I don't have to worry about trading anymore and I got rid of that load... But I am still miserable, perhaps worse than ever; maybe I just don't understand and feel it because I've become slow and numb. The only positive thing that happened to me recently is that I finally started losing weight, and I'm about 1/4 of the way back to my normal weight.
As for my future, I am looking at several possibilities here. So far my parents are allowing my miserable life to continue; they let me live with them and buy me food. I don't need anything else right now. But it can't go on like this forever. The thought of having a mundane low-paying job in this shithole of a country depresses me. I will probably do English tutoring temporarily if there's demand for such work. My old school friends want me to help them in their business and my dad wants me to help him in his, and I probably should, but I feel useless, pathetic and incapable of doing anything of value. And business just seems boring, difficult and too stressful for me right now. Just not my cup of tea.
I am also looking at creative work. I love video games, music, films and other forms of art. I love games the most though, so I am looking into game dev. I don't really like programming; I learned some during my school years, and the pay would probably be higher for a programmer than for a creator of any kind of art. However, I think I would enjoy art creation much more, even though I don't have any experience in drawing and only limited experience in music production. And I am not one of those kids who always had a sketchbook with them at school. Having to make another life choice paralyzes me. I am leaning towards art. I don't feel confident in my ability to learn this skill from scratch, but I think it's my best shot at finding a job that would make me happy.
So perhaps, when this whole pandemic is over, I'll go to Europe and get my degree, get a job there and stay. American Dream is dead to me, and Europe is cheaper, closer, safe and comfortable. Just the thing for a person who feels like they are thrice their real age.
Outro.
Thanks for coming to my TED Talk. Special thanks if you read the whole thing, it means a whole lot to me, an internet stranger. But even if no one reads it, feels good to get this off my chest. I actually cried during writing some parts. Holy shit, this might be the longest and smartest looking thing my dumbed down head could manage to generate since college. I hope that you're having a great day. Stay healthy and be careful during this fucking pandemic. All the best.
submitted by OberV0lt to TrueOffMyChest

Beginner’s Guide to BitMEX


Founded by HDR Global Trading Limited (which in turn was founded by former bankers Arthur Hayes, Samuel Reed and Ben Delo) in 2014, BitMEX is a trading platform operating around the world and registered in the Seychelles.
Short for Bitcoin Mercantile Exchange, BitMEX is one of the largest Bitcoin trading platforms currently operating, with a daily trading volume of over 35,000 BTC, over 540,000 monthly visits, and a trading history of over $34 billion worth of Bitcoin since its inception.

Unlike many other trading exchanges, BitMEX only accepts deposits through Bitcoin, which can then be used to purchase a variety of other cryptocurrencies. BitMEX specialises in sophisticated financial operations such as margin trading, which is trading with leverage. Like many of the exchanges that operate through cryptocurrencies, BitMEX is currently unregulated in any jurisdiction.
Visit BitMEX

How to Sign Up to BitMEX

In order to create an account on BitMEX, users first have to register with the website. Registration only requires an email address; it must be a genuine address, as users will receive a confirmation email in order to verify the account. Once users are registered, there are no trading limits. Traders must be at least 18 years of age to sign up.
However, it should be noted that BitMEX does not accept any US-based traders and will use IP checks to verify that users are not in the US. While some US users have bypassed this with the use of a VPN, it is not recommended that US individuals sign up to the BitMEX service, especially given the fact that alternative exchanges are available to service US customers that function within the US legal framework.
How to Use BitMEX
BitMEX allows users to trade cryptocurrencies against a number of fiat currencies, namely the US Dollar, the Japanese Yen and the Chinese Yuan. BitMEX allows users to trade a number of different cryptocurrencies, namely Bitcoin, Bitcoin Cash, Dash, Ethereum, Ethereum Classic, Litecoin, Monero, Ripple, Tezos and Zcash.
The trading platform on BitMEX is very intuitive and easy to use for those familiar with similar markets. However, it is not for the beginner. The interface does look a little dated when compared to newer exchanges like Binance and KuCoin.
Once users have signed up to the platform, they should click on Trade, and all the trading instruments will be displayed beneath.
Clicking on the particular instrument opens the orderbook, recent trades, and the order slip on the left. The order book shows three columns – the bid value for the underlying asset, the quantity of the order, and the total USD value of all orders, both short and long.
The widgets on the trading platform can be changed according to the user’s viewing preferences, allowing users full control over what is displayed. It also has a built-in feature that provides TradingView charting. This offers a wide range of charting tools and is considered an improvement on the offerings available from many of its competitors.
Once trades are made, all orders can be easily viewed in the trading platform interface. There are tabs where users can select their Active Orders, see the Stops that are in place, check the Orders Filled (totally or partially) and the trade history. On the Active Orders and Stops tabs, traders can cancel any order by clicking the “Cancel” button. Users can also see all currently open positions, with an indication of whether each is in the black or in the red.
BitMEX uses a method called auto-deleveraging to ensure that liquidated positions can be closed even in a volatile market. Auto-deleveraging means that if a position goes bankrupt without available liquidity, the positive side of the trade is deleveraged, in order of profitability and leverage, with the highest-leveraged position first in the queue. Traders are always shown where they sit in the auto-deleveraging queue, should it be needed.
Although the BitMEX platform is optimized for mobile, it only has an Android app (which is not official). There is no iOS app available at present. However, it is recommended that users use it on the desktop if possible.
BitMEX offers a variety of order types for users (a rough sketch of how these map onto API calls follows the list):
  • Limit Order (the order is fulfilled if the given price is achieved);
  • Market Order (the order is executed at current market price);
  • Stop Limit Order (like a stop order, but allows users to set the price of the Order once the Stop Price is triggered);
  • Stop Market Order (this is a stop order that does not enter the order book, remaining unseen until the market reaches the trigger);
  • Trailing Stop Order (it is similar to a Stop Market order, but here users set a trailing value that is used to place the market order);
  • Take Profit Limit Order (this can be used, similarly to a Stop Order, to set a target price on a position. In this case, it is in respect of making gains, rather than cutting losses);
  • Take Profit Market Order (same as the previous type, but in this case, the order triggered will be a market order, and not a limit one)
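For readers curious how these order types map onto actual API calls, here is a minimal sketch of submitting a limit order and a stop-market order programmatically. It assumes the request-signing convention described in BitMEX's public API documentation (an HMAC-SHA256 signature over the verb, path, expiry and body) and the `ordType`/`stopPx` parameter names from that documentation; the key and secret are placeholders, and this is an illustration rather than production trading code.

```python
import hashlib
import hmac
import json
import time

import requests

BASE_URL = "https://www.bitmex.com"
API_KEY = "YOUR_API_KEY"        # placeholder
API_SECRET = "YOUR_API_SECRET"  # placeholder


def signed_headers(verb, path, body):
    """Build the authentication headers: HMAC-SHA256 over verb+path+expires+body."""
    expires = str(int(time.time()) + 60)
    signature = hmac.new(API_SECRET.encode(),
                         (verb + path + expires + body).encode(),
                         hashlib.sha256).hexdigest()
    return {"api-key": API_KEY, "api-expires": expires,
            "api-signature": signature, "content-type": "application/json"}


def place_order(order):
    """POST an order payload to the /api/v1/order endpoint."""
    path = "/api/v1/order"
    body = json.dumps(order)
    resp = requests.post(BASE_URL + path, data=body,
                         headers=signed_headers("POST", path, body))
    resp.raise_for_status()
    return resp.json()


# Limit order: buy 100 XBTUSD contracts at $8,000.
limit_order = {"symbol": "XBTUSD", "side": "Buy", "orderQty": 100,
               "price": 8000, "ordType": "Limit"}

# Stop-market order: sell 100 contracts if the price falls to $7,500.
stop_order = {"symbol": "XBTUSD", "side": "Sell", "orderQty": 100,
              "stopPx": 7500, "ordType": "Stop"}

# place_order(limit_order); place_order(stop_order)  # uncomment with real keys
```

Trying this against the testnet platform mentioned later in this guide, rather than the live exchange, is the sensible way to experiment.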
The exchange offers margin trading in all of the cryptocurrencies displayed on the website. It also offers trading in futures and derivatives such as perpetual swaps.

Futures and Swaps

A futures contract is an agreement to buy or sell a given asset in the future at a predetermined price. On BitMEX, users can leverage up to 100x on certain contracts.
Perpetual swaps are similar to futures, except that there is no expiry date for them and no settlement. Additionally, they trade close to the underlying reference Index Price, unlike futures, which may diverge substantially from the Index Price.
BitMEX also offers Binary series contracts, which are prediction-based contracts that can only settle at either 0 or 100. In essence, the Binary series contracts are a more complicated way of making a bet on a given event.
The only Binary series betting instrument currently available is related to the next 1MB block on the Bitcoin blockchain. Binary series contracts are traded with no leverage, a 0% maker fee, a 0.25% taker fee and a 0.25% settlement fee.

Bitmex Leverage

BitMEX allows its traders to leverage their positions on the platform. Leverage is the ability to place orders that are bigger than the user’s existing balance. This can lead to a higher profit compared with placing an order using only the wallet balance. Trading in such conditions is called “Margin Trading.”
There are two types of Margin Trading: Isolated and Cross-Margin. The former allows the user to select the amount of money in their wallet that should be used to hold their position after an order is placed. The latter, however, means that all of the money in the user’s wallet can be used to hold their position, and it should therefore be treated with extreme caution.
The BitMEX platform allows users to set their leverage level by using the leverage slider. A maximum leverage of 1:100 is available (on Bitcoin and Bitcoin Cash). This is quite a high level of leverage for cryptocurrencies, with the average offered by other exchanges rarely exceeding 1:20.
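As a rough illustration of what the leverage slider means for required margin, the snippet below computes the collateral needed to open a position of a given size at different leverage settings. This is a simplified sketch that ignores fees, funding and the inverse-contract mechanics of XBTUSD; the position size is an arbitrary example.

```python
def initial_margin(position_value_usd, leverage):
    """Collateral required to open a position of the given notional value."""
    return position_value_usd / leverage

position = 10_000  # illustrative $10,000 notional position
for lev in (1, 10, 20, 100):
    print(f"{lev:>3}x leverage -> ${initial_margin(position, lev):,.0f} of margin required")
# 1x -> $10,000, 10x -> $1,000, 20x -> $500, 100x -> $100
```

The same arithmetic run in reverse is why a roughly 1% adverse move is enough to wipe out a 100x position.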

BitMEX Fees

For traditional futures trading, BitMEX has a straightforward fee schedule. As noted, in terms of leverage offered, BitMEX offers up to 100x leverage, with the amount of leverage varying from product to product.
However, it should be noted that trading at the highest leverage levels is sophisticated and is intended for professional investors who are familiar with speculative trading. The full fee and leverage schedule for each product is published on the exchange’s website.
However, there are additional fees for hidden / iceberg orders. A hidden order pays the taker fee until the entire hidden quantity is completely executed. Then, the order will become normal, and the user will receive the maker rebate for the non-hidden amount.

Deposits and Withdrawals

BitMEX does not charge fees on deposits or withdrawals. However, when withdrawing Bitcoin, the minimum Network fee is based on blockchain load. The only costs therefore are those of the banks or the cryptocurrency networks.
As noted previously, BitMEX only accepts deposits in Bitcoin and therefore Bitcoin serves as collateral on trading contracts, regardless of whether or not the trade involves Bitcoin.
The minimum deposit is 0.001 BTC. There are no limits on withdrawals, but withdrawals can also be in Bitcoin only. To make a withdrawal, all that users need to do is insert the amount to withdraw and the wallet address to complete the transfer.
Deposits can be made 24/7 but withdrawals are processed by hand at a recurring time once per day. The hand processed withdrawals are intended to increase the security levels of users’ funds by providing extra time (and email notice) to cancel any fraudulent withdrawal requests, as well as bypassing the use of automated systems & hot wallets which may be more prone to compromise.

Supported Currencies

BitMEX operates as a crypto-to-crypto exchange and makes use of a Bitcoin-in/Bitcoin-out structure. Platform users are therefore currently unable to use fiat currencies for any payments or transfers. A plus side of this is that there are no limits for trading, and the exchange incorporates trading pairs linked to the US Dollar (XBT), Japanese Yen (XBJ), and Chinese Yuan (XBC).
BitMEX supports the following cryptocurrencies:
  • Bitcoin (XBT)
  • Bitcoin Cash (BCH)
  • Ethereum (ETH)
  • Ethereum Classic (ETC)
  • Litecoin (LTC)
  • Ripple Token (XRP)
  • Monero (XMR)
  • Dash (DASH)
  • Zcash (ZEC)
  • Cardano (ADA)
  • Tron (TRX)
  • EOS Token (EOS)
BitMEX also offers leverage options on the following coins:
  • 5x: Zcash (ZEC)
  • 20x: Ripple (XRP), Bitcoin Cash (BCH), Cardano (ADA), EOS Token (EOS), Tron (TRX)
  • 25x: Monero (XMR)
  • 33x: Litecoin (LTC)
  • 50x: Ethereum (ETH)
  • 100x: Bitcoin (XBT), Bitcoin / Yen (XBJ), Bitcoin / Yuan (XBC)

Trading Technologies International Partnership

HDR Global Trading, the company which owns BitMEX, has recently announced a partnership with Trading Technologies International, Inc. (TT), a leading international high-performance trading software provider.
The TT platform is designed specifically for professional traders, brokers, and market-access providers, and incorporates a wide variety of trading tools and analytical indicators that allow even the most advanced traders to customize the software to suit their unique trading styles. The TT platform also provides traders with global market access and trade execution through its privately managed infrastructure and the partnership will see BitMEX users gaining access to the trading tools on all BitMEX products, including the popular XBT/USD Perpetual Swap pairing.

The BitMEX Insurance Fund

The ability to trade on leverage is one of the exchange’s main selling points. However, offering leverage and providing the opportunity for traders to trade against each other may result in a situation where the winners do not receive all of their expected profits: because of the amounts of leverage involved, it’s possible that the losers may not have enough margin in their positions to pay the winners.
Traditional exchanges like the Chicago Mercantile Exchange (CME) offset this problem by utilizing multiple layers of protection and cryptocurrency trading platforms offering leverage cannot currently match the levels of protection provided to winning traders.
In addition, cryptocurrency exchanges offering leveraged trades offer a capped downside and unlimited upside on a highly volatile asset, with the caveat that, on occasion, there may not be enough funds in the system to pay out the winners.
To help solve this problem, BitMEX has developed an insurance fund system, and when a trader has an open leveraged position, their position is forcefully closed or liquidated when their maintenance margin is too low.
Here, a trader’s profit and loss does not reflect the actual price at which their position was closed on the market; on BitMEX, when a trader is liquidated, their equity associated with the position drops to zero.
In the following example, the trader has taken a 100x long position. If the mark price of Bitcoin falls to $3,980 (a 0.5% drop), the position gets liquidated, and the 100 Bitcoin position needs to be sold on the market.
This means that it does not matter what price this trade executes at, whether it’s $3,995 or $3,000: from the view of the liquidated trader, regardless of the price, they lose all the equity they had in their position, losing the entire one Bitcoin.
Assuming there is a fully liquid market, the bid/ask spread should be tighter than the maintenance margin. Here, liquidations manifest as contributions to the insurance fund (e.g. if the maintenance margin is 50bps, but the market is 1bp wide), and the insurance fund should rise by close to the same amount as the maintenance margin when a position is liquidated. In this scenario, as long as healthy liquid markets persist, the insurance fund should continue its steady growth.
The following graphs further illustrate the example, and in the first chart, market conditions are healthy with a narrow bid/ask spread (just $2) at the time of liquidation. Here, the closing trade occurs at a higher price than the bankruptcy price (the price where the margin balance is zero) and the insurance fund benefits.
Illustrative example of an insurance contribution – Long 100x with 1 BTC collateral
(Note: The above illustration is based on opening a 100x long position at $4,000 per BTC and 1 Bitcoin of collateral. The illustration is an oversimplification and ignores factors such as fees and other adjustments.
The bid and offer prices represent the state of the order book at the time of liquidation. The closing trade price is $3,978, representing $1 of slippage compared to the $3,979 bid price at the time of liquidation.)
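The arithmetic behind that illustration can be reproduced roughly as follows. This is a simplified sketch of the inverse-contract maths using the figures quoted above (100x long, $4,000 entry, 1 BTC of collateral, a 0.5% maintenance margin and a closing trade at $3,978); it ignores fees, funding and other adjustments, so the outputs are indicative only.

```python
entry = 4000.0        # entry price, USD per BTC
collateral = 1.0      # BTC posted as margin
leverage = 100
maintenance = 0.005   # 0.5% maintenance margin
close_price = 3978.0  # price the liquidation engine actually achieves

notional_btc = collateral * leverage   # 100 BTC of exposure
contracts = notional_btc * entry       # 400,000 inverse contracts of $1 each

# Bankruptcy price: the price at which equity (collateral + PnL) reaches zero.
bankruptcy = contracts / (contracts / entry + collateral)

# Liquidation trigger: equity falls to the maintenance margin (0.5% of notional).
liquidation = contracts / (contracts / entry + collateral - maintenance * notional_btc)

# Whatever the engine recovers above the bankruptcy price goes to the insurance fund.
fund_contribution = contracts * (1 / bankruptcy - 1 / close_price)

print(f"bankruptcy price  ~ ${bankruptcy:,.2f}")     # ~ $3,960
print(f"liquidation price ~ ${liquidation:,.2f}")    # ~ $3,980
print(f"insurance fund contribution ~ {fund_contribution:.2f} BTC")
```

The same formulas with a closing price below the bankruptcy price give a negative contribution, which is the depletion case described next.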
The second chart shows a wide bid/ask spread at the time of liquidation, here, the closing trade takes place at a lower price than the bankruptcy price, and the insurance fund is used to make sure that winning traders receive their expected profits.
This works to stabilize the potential for returns, as there is no guarantee that healthy market conditions will continue, especially during periods of heightened price volatility. During these periods, it’s actually possible that the insurance fund is drawn down faster than it is built up.
Illustrative example of an insurance depletion – Long 100x with 1 BTC collateral
(Notes: The above illustration is based on opening a 100x long position at $4,000 per BTC and 1 Bitcoin of collateral. The illustration is an oversimplification and ignores factors such as fees and other adjustments.
The bid and offer prices represent the state of the order book at the time of liquidation. The closing trade price is $3,800, representing $20 of slippage compared to the $3,820 bid price at the time of liquidation.)
The exchange declared in February 2019, that the BitMEX insurance fund retained close to 21,000 Bitcoin (around $70 million based on Bitcoin spot prices at the time).
This figure represents just 0.007% of BitMEX’s notional annual trading volume, which has been quoted as being approximately $1 trillion. Although this is higher than the insurance fund as a proportion of trading volume at the CME, winning traders on BitMEX are still exposed to much larger risks than CME traders as:
  • BitMEX does not have clearing members with large balance sheets and traders are directly exposed to each other.
  • BitMEX does not demand payments from traders with negative account balances.
  • The underlying instruments on BitMEX are more volatile than the more traditional instruments available on CME.
Therefore, with the insurance fund remaining capitalized, the system effectively works by having participants who get liquidated pay for liquidations: a “losers pay for losers” mechanism.
This system may appear controversial at first, though some may argue that there is a degree of uniformity to it. It’s also worth noting that the exchange makes use of Auto Deleveraging, which means that on occasion, leveraged positions in profit can still be reduced during certain time periods if a liquidated order cannot be executed in the market.
More adventurous traders should note that while the insurance fund holds 21,000 Bitcoin, worth approximately 0.1% of the total Bitcoin supply, BitMEX still doesn’t offer the same level of guarantees to winning traders that are provided by more traditional leveraged trading platforms.
Given the inherent volatility of the cryptocurrency market, there remains some possibility that the fund gets drained down to zero despite its current size. This may result in more successful traders lacking confidence in the platform and choosing to limit their exposure in the event of BitMEX being unable to compensate winning traders.

How suitable is BitMEX for Beginners?

BitMEX generates high Bitcoin trading levels and also attracts good levels of volume across other crypto-to-crypto transfers. This helps to maintain a buzz around the exchange, and BitMEX also employs relatively low trading fees and is available around the world (except to US residents).
This helps to attract the attention of people new to the process of trading on leverage and when getting started on the platform there are 5 main navigation Tabs to get used to:
  • **Trade:** The trading dashboard of BitMEX. This tab allows you to select your preferred trading instrument and choose leverage, as well as place and cancel orders. You can also see your position information and view key information in the contract details.
  • **Account:** Here, all your account information is displayed, including available Bitcoin margin balances, deposits and withdrawals, and trade history.
  • **Contracts:** This tab covers further instrument information including funding history, contract sizes, leverage offered, expiry, underlying reference Price Index data, and other key features.
  • **References:** This resource centre allows you to learn about futures, perpetual contracts, position marking, and liquidation.
  • **API:** From here you can set up an API connection with BitMEX and utilize the REST API and WebSocket API (a short WebSocket example follows this list).
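As a small taste of what the API tab exposes, the sketch below subscribes to live trades over the public WebSocket feed. It assumes the `wss://www.bitmex.com/realtime` endpoint and the `{"op": "subscribe"}` message format described in BitMEX's API documentation, and uses the third-party `websockets` package; treat it as an illustration rather than a reference client.

```python
import asyncio
import json

import websockets  # third-party: pip install websockets

WS_URL = "wss://www.bitmex.com/realtime"


async def stream_trades(symbol="XBTUSD", max_messages=20):
    """Subscribe to the public trade feed and print a handful of updates."""
    async with websockets.connect(WS_URL) as ws:
        await ws.send(json.dumps({"op": "subscribe",
                                  "args": [f"trade:{symbol}"]}))
        for _ in range(max_messages):
            message = json.loads(await ws.recv())
            # Trade updates carry a "data" list; other messages are the welcome
            # banner or subscription acknowledgements and are skipped here.
            for trade in message.get("data", []):
                print(trade["timestamp"], trade["side"],
                      trade["size"], "@", trade["price"])


if __name__ == "__main__":
    asyncio.run(stream_trades())
```

No API key is needed for public market-data topics; authenticated topics such as order and position updates require the same kind of HMAC signature sketched earlier.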
BitMEX also employs 24/7 customer support and the team can also be contacted on their Twitter and Reddit accounts.
In addition, BitMEX provides a variety of educational resources including an FAQ section, Futures guides, Perpetual Contracts guides, and further resources in the “References” account tab.
For users looking for more in depth analysis, the BitMEX blog produces high level descriptions of a number of subjects and has garnered a good reputation among the cryptocurrency community.
Most importantly, the exchange also maintains a testnet platform, built on top of testnet Bitcoin, which allows anyone to try out programs and strategies before moving on to the live exchange.
This is crucial as despite the wealth of resources available, BitMEX is not really suitable for beginners, and margin trading, futures contracts and swaps are best left to experienced, professional or institutional traders.
Margin trading and choosing to engage in leveraged activity are risky processes and even more advanced traders can describe the process as a high risk and high reward “game”. New entrants to the sector should spend a considerable amount of time learning about margin trading and testing out strategies before considering whether to open a live account.

Is BitMEX Safe?

BitMEX is widely considered to have strong levels of security. The platform uses multi-signature deposits and withdrawal schemes which can only be used by BitMEX partners. BitMEX also utilises Amazon Web Services to protect the servers with text messages and two-factor authentication, as well as hardware tokens.
BitMEX also has a system for risk checks, which requires that the sum of all account holdings on the website must be zero. If it’s not, all trading is immediately halted. As noted previously, withdrawals are all individually hand-checked by employees, and private keys are never stored in the cloud. Deposit addresses are externally verified to make sure that they contain matching keys. If they do not, there is an immediate system shutdown.
In addition, the BitMEX trading platform is written in kdb+, a database and toolset popular amongst major banks in high frequency trading applications. The BitMEX engine appears to be faster and more reliable than some of its competitors, such as Poloniex and Bittrex.
They have email notifications, and PGP encryption is used for all communication.
The exchange hasn’t been hacked in the past.

How Secure is the platform?

As previously mentioned, BitMEX is considered to be a safe exchange and incorporates a number of security protocols that are becoming standard among the sector’s leading exchanges. In addition to making use of Amazon Web Services’ cloud security, all the exchange’s systems can only be accessed after passing through multiple forms of authentication, and individual systems are only able to communicate with each other across approved and monitored channels.
Communication is also further secured as the exchange provides optional PGP encryption for all automated emails, and users can insert their PGP public key into the form inside their accounts.
Once set up, BitMEX will encrypt and sign all the automated emails sent by you or to your account by the [[email protected]](mailto:[email protected]) email address. Users can also initiate secure conversations with the support team by using the email address and public key on the Technical Contact, and the team have made their automated system’s PGP key available for verification in their Security Section.
The platform’s trading engine is written in kdb+, a database and toolset used by leading financial institutions in high-frequency trading applications, and the speed and reliability of the engine is also used to perform a full risk check after every order placement, trade, settlement, deposit, and withdrawal.
All accounts in the system must consistently sum to zero, and if this does not happen then trading on the platform is immediately halted for all users.
With regards to wallet security, BitMEX makes use of a multisignature deposit and withdrawal scheme, and all exchange addresses are multisignature by default with all storage being kept offline. Private keys are not stored on any cloud servers and deep cold storage is used for the majority of funds.
Furthermore, all deposit addresses sent by the BitMEX system are verified by an external service that works to ensure that they contain the keys controlled by the founders, and in the event that the public keys differ, the system is immediately shut down and trading halted. The exchange’s security practices also see that every withdrawal is audited by hand by a minimum of two employees before being sent out.

BitMEX Customer Support

The trading platform has 24/7 support on multiple channels, including email, ticket systems and social media. The typical response time from the customer support team is about one hour, and feedback on the customer support generally suggests that the responses are helpful and not restricted to automated replies.
BitMEX also offers a knowledge base and FAQs which, although not necessarily always helpful, may assist and direct users towards the necessary channels to obtain assistance.
BitMEX also offers trading guides, which can be accessed on its website.

Conclusion

There would appear to be few complaints online about BitMEX, with most issues relating to technical matters or about the complexities of using the website. Older complaints also appeared to include issues relating to low liquidity, but this no longer appears to be an issue.
BitMEX is clearly not a platform intended for the amateur investor. The interface is complex, and it can therefore be very difficult for users to get used to the platform and even to navigate the website.
However, the platform does provide a wide range of tools and once users have experience of the platform they will appreciate the wide range of information that the platform provides.
Visit BitMEX
submitted by bitmex_register to u/bitmex_register

Vault 7 - CIA Hacking Tools Revealed

March 07, 2017
from Wikileaks Website



Press Release
Today, Tuesday 7 March 2017, WikiLeaks begins its new series of leaks on the U.S. Central Intelligence Agency.
Code-named "Vault 7" by WikiLeaks, it is the largest ever publication of confidential documents on the agency.
The first full part of the series, "Year Zero", comprises 8,761 documents and files from an isolated, high-security network situated inside the CIA's Center for Cyber Intelligence (below image) in Langley, Virginia.
It follows an introductory disclosure last month of CIA targeting French political parties and candidates in the lead up to the 2012 presidential election.
Recently, the CIA lost control of the majority of its hacking arsenal including,
  1. malware
  2. viruses
  3. trojans
  4. weaponized "zero day" exploits
  5. malware remote control systems

...and associated documentation.
This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA.
The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive.
"Year Zero" introduces the scope and direction of the CIA's global covert hacking program, its malware arsenal and dozens of "zero day" weaponized exploits against a wide range of U.S. and European company products, include,

  1. Apple's iPhone
  2. Google's Android
  3. Microsoft's Windows
  4. Samsung TVs,

...which are turned into covert microphones.
Since 2001 the CIA has gained political and budgetary preeminence over the U.S. National Security Agency (NSA).
The CIA found itself building not just its now infamous drone fleet, but a very different type of covert, globe-spanning force - its own substantial fleet of hackers.
The agency's hacking division freed it from having to disclose its often controversial operations to the NSA (its primary bureaucratic rival) in order to draw on the NSA's hacking capacities.
By the end of 2016, the CIA's hacking division, which formally falls under the agency's Center for Cyber Intelligence (CCI - below image), had over 5000 registered users and had produced more than a thousand hacking systems, trojans, viruses, and other "weaponized" malware.



Such is the scale of the CIA's undertaking that by 2016, its hackers had utilized more code than that used to run Facebook.
The CIA had created, in effect, its "own NSA" with even less accountability and without publicly answering the question as to whether such a massive budgetary spend on duplicating the capacities of a rival agency could be justified.
In a statement to WikiLeaks the source details policy questions that they say urgently need to be debated in public, including whether the CIA's hacking capabilities exceed its mandated powers and the problem of public oversight of the agency.
The source wishes to initiate a public debate about the security, creation, use, proliferation and democratic control of cyberweapons.
Once a single cyber 'weapon' is 'loose' it can spread around the world in seconds, to be used by rival states, cyber mafia and teenage hackers alike.

Julian Assange, WikiLeaks editor stated that,
"There is an extreme proliferation risk in the development of cyber 'weapons'.
Comparisons can be drawn between the uncontrolled proliferation of such 'weapons', which results from the inability to contain them combined with their high market value, and the global arms trade.
But the significance of 'Year Zero' goes well beyond the choice between cyberwar and cyberpeace. The disclosure is also exceptional from a political, legal and forensic perspective."

Wikileaks has carefully reviewed the "Year Zero" disclosure and published substantive CIA documentation while avoiding the distribution of 'armed' cyberweapons until a consensus emerges on the technical and political nature of the CIA's program and how such 'weapons' should be analyzed, disarmed and published.

Wikileaks has also decided to Redact (see far below) and Anonymize some identifying information in "Year Zero" for in-depth analysis. These redactions include tens of thousands of CIA targets and attack machines throughout Latin America, Europe and the United States.

While we are aware of the imperfect results of any approach chosen, we remain committed to our publishing model and note that the quantity of published pages in "Vault 7" part one ("Year Zero") already eclipses the total number of pages published over the first three years of the Edward Snowden NSA leaks.

Analysis

CIA malware targets iPhone, Android, smart TVs
CIA malware and hacking tools are built by EDG (Engineering Development Group), a software development group within CCI (Center for Cyber Intelligence), a department belonging to the CIA's DDI (Directorate for Digital Innovation).
The DDI is one of the five major directorates of the CIA (see above image of the CIA for more details).
The EDG is responsible for the development, testing and operational support of all backdoors, exploits, malicious payloads, trojans, viruses and any other kind of malware used by the CIA in its covert operations world-wide.
The increasing sophistication of surveillance techniques has drawn comparisons with George Orwell's 1984, but "Weeping Angel", developed by the CIA's Embedded Devices Branch (EDB), which infests smart TVs, transforming them into covert microphones, is surely its most emblematic realization.
The attack against Samsung smart TVs was developed in cooperation with the United Kingdom's MI5/BTSS.
After infestation, Weeping Angel places the target TV in a 'Fake-Off' mode, so that the owner falsely believes the TV is off when it is on. In 'Fake-Off' mode the TV operates as a bug, recording conversations in the room and sending them over the Internet to a covert CIA server.
As of October 2014 the CIA was also looking at infecting the vehicle control systems used by modern cars and trucks. The purpose of such control is not specified, but it would permit the CIA to engage in nearly undetectable assassinations.
The CIA's Mobile Devices Branch (MDB) developed numerous attacks to remotely hack and control popular smart phones. Infected phones can be instructed to send the CIA the user's geolocation, audio and text communications as well as covertly activate the phone's camera and microphone.
Despite iPhone's minority share (14.5%) of the global smart phone market in 2016, a specialized unit in the CIA's Mobile Development Branch produces malware to infest, control and exfiltrate data from iPhones and other Apple products running iOS, such as iPads.
CIA's arsenal includes numerous local and remote "zero days" developed by CIA or obtained from GCHQ, NSA, FBI or purchased from cyber arms contractors such as Baitshop.
The disproportionate focus on iOS may be explained by the popularity of the iPhone among social, political, diplomatic and business elites.
A similar unit targets Google's Android which is used to run the majority of the world's smart phones (~85%) including Samsung, HTC and Sony. 1.15 billion Android powered phones were sold last year.
"Year Zero" shows that as of 2016 the CIA had 24 "weaponized" Android "zero days" which it has developed itself and obtained from GCHQ, NSA and cyber arms contractors.
These techniques permit the CIA to bypass the encryption of WhatsApp, Signal, Telegram, Weibo, Confide and Cloackman by hacking the "smart" phones that they run on and collecting audio and message traffic before encryption is applied.
CIA malware targets Windows, OSx, Linux, routers
The CIA also runs a very substantial effort to infect and control Microsoft Windows users with its malware.
This includes multiple local and remote weaponized "zero days", air gap jumping viruses such as "Hammer Drill" which infects software distributed on CD/DVDs, infectors for removable media such as USBs, systems to hide data in images or in covert disk areas ("Brutal Kangaroo") and to keep its malware infestations going.
Many of these infection efforts are pulled together by the CIA's Automated Implant Branch (AIB), which has developed several attack systems for automated infestation and control of CIA malware, such as "Assassin" and "Medusa".
Attacks against Internet infrastructure and webservers are developed by the CIA's Network Devices Branch (NDB).
The CIA has developed automated multi-platform malware attack and control systems covering Windows, Mac OS X, Solaris, Linux and more, such as EDB's "HIVE" and the related "Cutthroat" and "Swindle" tools, which are described in the examples section far below.
CIA 'hoarded' vulnerabilities ("zero days")
In the wake of Edward Snowden's leaks about the NSA, the U.S. technology industry secured a commitment from the Obama administration that the executive would disclose on an ongoing basis - rather than hoard - serious vulnerabilities, exploits, bugs or "zero days" to Apple, Google, Microsoft, and other US-based manufacturers.
Serious vulnerabilities not disclosed to the manufacturers place huge swathes of the population and critical infrastructure at risk from foreign intelligence or cyber criminals who independently discover or hear rumors of the vulnerability.
If the CIA can discover such vulnerabilities so can others.
The U.S. government's commitment to the Vulnerabilities Equities Process came after significant lobbying by US technology companies, who risk losing their share of the global market over real and perceived hidden vulnerabilities.
The government stated that it would disclose all pervasive vulnerabilities discovered after 2010 on an ongoing basis.
"Year Zero" documents show that the CIA breached the Obama administration's commitments. Many of the vulnerabilities used in the CIA's cyber arsenal are pervasive and some may already have been found by rival intelligence agencies or cyber criminals.
As an example, specific CIA malware revealed in "Year Zero" is able to penetrate, infest and control both the Android phone and iPhone software that runs or has run presidential Twitter accounts.
The CIA attacks this software by using undisclosed security vulnerabilities ("zero days") possessed by the CIA but if the CIA can hack these phones then so can everyone else who has obtained or discovered the vulnerability.
As long as the CIA keeps these vulnerabilities concealed from Apple and Google (who make the phones) they will not be fixed, and the phones will remain hackable.
The same vulnerabilities exist for the population at large, including the U.S. Cabinet, Congress, top CEOs, system administrators, security officers and engineers.
By hiding these security flaws from manufacturers like Apple and Google the CIA ensures that it can hack everyone at the expense of leaving everyone hackable.
'Cyberwar' programs are a serious proliferation risk
Cyber 'weapons' are not possible to keep under effective control.
While nuclear proliferation has been restrained by the enormous costs and visible infrastructure involved in assembling enough fissile material to produce a critical nuclear mass, cyber 'weapons', once developed, are very hard to retain.
Cyber 'weapons' are in fact just computer programs which can be pirated like any other. Since they are entirely comprised of information they can be copied quickly with no marginal cost.
Securing such 'weapons' is particularly difficult since the same people who develop and use them have the skills to exfiltrate copies without leaving traces - sometimes by using the very same 'weapons' against the organizations that contain them.
There are substantial price incentives for government hackers and consultants to obtain copies since there is a global "vulnerability market" that will pay hundreds of thousands to millions of dollars for copies of such 'weapons'.
Similarly, contractors and companies who obtain such 'weapons' sometimes use them for their own purposes, obtaining advantage over their competitors in selling 'hacking' services.
Over the last three years the United States intelligence sector, which consists of government agencies such as the CIA and NSA and their contractors, such as Booz Allen Hamilton, has been subject to an unprecedented series of data exfiltrations by its own workers.
A number of intelligence community members not yet publicly named have been arrested or subject to federal criminal investigations in separate incidents.
Most visibly, on February 8, 2017 a U.S. federal grand jury indicted Harold T. Martin III with 20 counts of mishandling classified information.
The Department of Justice alleged that it seized some 50,000 gigabytes of information from Harold T. Martin III that he had obtained from classified programs at NSA and CIA, including the source code for numerous hacking tools.
Once a single cyber 'weapon' is 'loose' it can spread around the world in seconds, to be used by peer states, cyber mafia and teenage hackers alike.
U.S. Consulate in Frankfurt is a covert CIA hacker base
In addition to its operations in Langley, Virginia the CIA also uses the U.S. consulate in Frankfurt as a covert base for its hackers covering Europe, the Middle East and Africa.
CIA hackers operating out of the Frankfurt consulate ("Center for Cyber Intelligence Europe" or CCIE) are given diplomatic ("black") passports and State Department cover.
The instructions for incoming CIA hackers make Germany's counter-intelligence efforts appear inconsequential: "Breeze through German Customs because you have your cover-for-action story down pat, and all they did was stamp your passport." Your Cover Story (for this trip): Q: Why are you here? A: Supporting technical consultations at the Consulate. Two earlier WikiLeaks publications give further detail on CIA approaches to customs and secondary screening procedures.
Once in Frankfurt, CIA hackers can travel without further border checks to the 25 European countries that are part of the Schengen open border area - including France, Italy and Switzerland.
A number of the CIA's electronic attack methods are designed for physical proximity.
These attack methods are able to penetrate high-security networks that are disconnected from the internet, such as police record databases. In these cases, a CIA officer, agent or allied intelligence officer acting under instructions physically infiltrates the targeted workplace.
The attacker is provided with a USB containing malware developed for the CIA for this purpose, which is inserted into the targeted computer. The attacker then infects and exfiltrates data to removable media.
For example, the CIA attack system Fine Dining provides 24 decoy applications for CIA spies to use.
To witnesses, the spy appears to be running a program showing videos (e.g. VLC), presenting slides (Prezi), playing a computer game (Breakout2, 2048) or even running a fake virus scanner (Kaspersky, McAfee, Sophos).
But while the decoy application is on the screen, the underlying system is automatically infected and ransacked.
How the CIA dramatically increased proliferation risks
In what is surely one of the most astounding intelligence own goals in living memory, the CIA structured its classification regime such that, for the most market-valuable part of "Vault 7", namely the CIA's weaponized malware (implants plus zero days), Listening Posts (LP) and Command and Control (C2) systems, the agency has little legal recourse.
The CIA made these systems unclassified.
Why the CIA chose to make its cyber-arsenal unclassified reveals how concepts developed for military use do not easily crossover to the 'battlefield' of cyber 'war'.
To attack its targets, the CIA usually requires that its implants communicate with their control programs over the internet.
If CIA implants, Command & Control and Listening Post software were classified, then CIA officers could be prosecuted or dismissed for violating rules that prohibit placing classified information onto the Internet.
Consequently the CIA has secretly made most of its cyber spying/war code unclassified. The U.S. government is not able to assert copyright either, due to restrictions in the U.S. Constitution.
This means that cyber 'arms' manufactures and computer hackers can freely "pirate" these 'weapons' if they are obtained. The CIA has primarily had to rely on obfuscation to protect its malware secrets.
Conventional weapons such as missiles may be fired at the enemy (i.e. into an unsecured area). Proximity to or impact with the target detonates the ordnance including its classified parts. Hence military personnel do not violate classification rules by firing ordnance with classified parts.
Ordnance will likely explode. If it does not, that is not the operator's intent.
Over the last decade U.S. hacking operations have been increasingly dressed up in military jargon to tap into Department of Defense funding streams.
For instance, attempted "malware injections" (commercial jargon) or "implant drops" (NSA jargon) are being called "fires" as if a weapon was being fired.
However the analogy is questionable.
Unlike bullets, bombs or missiles, most CIA malware is designed to live for days or even years after it has reached its 'target'. CIA malware does not "explode on impact" but rather permanently infests its target. In order to infect a target's device, copies of the malware must be placed on the target's devices, giving physical possession of the malware to the target.
To exfiltrate data back to the CIA or to await further instructions the malware must communicate with CIA Command & Control (C2) systems placed on internet connected servers.
But such servers are typically not approved to hold classified information, so CIA command and control systems are also made unclassified.
A successful 'attack' on a target's computer system is more like a series of complex stock maneuvers in a hostile take-over bid or the careful planting of rumors in order to gain control over an organization's leadership rather than the firing of a weapons system.
If there is a military analogy to be made, the infestation of a target is perhaps akin to the execution of a whole series of military maneuvers against the target's territory including observation, infiltration, occupation and exploitation.
Evading forensics and anti-virus
A series of standards lays out CIA malware infestation patterns which are likely to assist forensic crime scene investigators, as well as Apple, Microsoft, Google, Samsung, Nokia, Blackberry, Siemens and anti-virus companies, attribute and defend against attacks.
"Tradecraft DO's and DON'Ts" contains CIA rules on how its malware should be written to avoid fingerprints implicating the "CIA, US government, or its witting partner companies" in "forensic review".
Similar secret standards cover the use of encryption to hide CIA hacker and malware communication (pdf), describing targets & exfiltrated data (pdf), executing payloads (pdf) and persisting (pdf) in the target's machines over time.
CIA hackers developed successful attacks against most well known anti-virus programs.
These are documented in AV defeats, Personal Security Products, Detecting and defeating PSPs and PSP/Debugger/RE Avoidance. For example, Comodo was defeated by CIA malware placing itself in the Windows "Recycle Bin", while Comodo 6.x has a "Gaping Hole of DOOM".
CIA hackers discussed what the NSA's "Equation Group" hackers did wrong and how the CIA's malware makers could avoid similar exposure.

Examples

The CIA's Engineering Development Group (EDG) management system contains around 500 different projects (only some of which are documented by "Year Zero") each with their own sub-projects, malware and hacker tools.
The majority of these projects relate to tools that are used for penetration, infestation ("implanting"), control and exfiltration.
Another branch of development focuses on the development and operation of Listening Posts (LP) and Command and Control (C2) systems used to communicate with and control CIA implants.
Special projects are used to target specific hardware from routers to smart TVs.
Some example projects are described below, but see the table of contents for the full list of projects described by WikiLeaks' "Year Zero".
UMBRAGE
The CIA's hand crafted hacking techniques pose a problem for the agency.
Each technique it has created forms a "fingerprint" that can be used by forensic investigators to attribute multiple different attacks to the same entity.
This is analogous to finding the same distinctive knife wound on multiple separate murder victims. The unique wounding style creates suspicion that a single murderer is responsible.
As soon as one murder in the set is solved, the other murders also find likely attribution.
The CIA's Remote Devices Branch's UMBRAGE group collects and maintains a substantial library of attack techniques 'stolen' from malware produced in other states including the Russian Federation.
With UMBRAGE and related projects the CIA can not only increase its total number of attack types but also misdirect attribution by leaving behind the "fingerprints" of the groups that the attack techniques were stolen from.
UMBRAGE components cover,
  1. keyloggers
  2. password collection
  3. webcam capture
  4. data destruction
  5. persistence
  6. privilege escalation
  7. stealth
  8. anti-virus (PSP) avoidance
  9. survey techniques

Fine Dining
Fine Dining comes with a standardized questionnaire, i.e. a menu, that CIA case officers fill out.
The questionnaire is used by the agency's OSB (Operational Support Branch) to transform the requests of case officers into technical requirements for hacking attacks (typically "exfiltrating" information from computer systems) for specific operations.
The questionnaire allows the OSB to identify how to adapt existing tools for the operation, and communicate this to CIA malware configuration staff.
The OSB functions as the interface between CIA operational staff and the relevant technical support staff.
Among the list of possible targets of the collection are,
  • 'Asset'
  • 'Liason Asset'
  • 'System Administrator'
  • 'Foreign Information Operations'
  • 'Foreign Intelligence Agencies'
  • 'Foreign Government Entities'
Notably absent is any reference to extremists or transnational criminals. The 'Case Officer' is also asked to specify the environment of the target like the type of computer, operating system used, Internet connectivity and installed anti-virus utilities (PSPs) as well as a list of file types to be exfiltrated like Office documents, audio, video, images or custom file types.
The 'menu' also asks for information if recurring access to the target is possible and how long unobserved access to the computer can be maintained.
This information is used by the CIA's 'JQJIMPROVISE' software (see below) to configure a set of CIA malware suited to the specific needs of an operation.
Improvise (JQJIMPROVISE)
'Improvise' is a toolset for configuration, post-processing, payload setup and execution vector selection for survey/exfiltration tools supporting all major operating systems like:
  1. Windows (Bartender)
  2. MacOS (JukeBox)
  3. Linux (DanceFloor)
Its configuration utilities, like Margarita, allow the NOC (Network Operation Center) to customize tools based on requirements from 'Fine Dining' questionnaires.
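To make the questionnaire-to-configuration flow concrete, here is a minimal illustrative sketch in Python. It is not taken from the leaked material: every field name and configuration key is an assumption invented for this example, and only the OS-to-payload names (Bartender, JukeBox, DanceFloor) echo the Improvise components listed above.

# Purely illustrative sketch of the workflow described above: answers from a
# Fine Dining-style questionnaire are turned into a concrete tool configuration.
# All field names and config keys below are assumptions made up for this example.
from dataclasses import dataclass

@dataclass
class Questionnaire:
    target_os: str            # e.g. "Windows", "MacOS", "Linux"
    has_av: bool              # whether a PSP (anti-virus product) is installed
    recurring_access: bool    # can the operator return to the machine?
    file_types: list          # e.g. ["docx", "pdf", "jpg"]

# Hypothetical mapping from operating system to a survey/exfiltration payload,
# named after the Improvise components listed above.
PAYLOAD_BY_OS = {
    "Windows": "bartender",
    "MacOS": "jukebox",
    "Linux": "dancefloor",
}

def build_config(q: Questionnaire) -> dict:
    """Translate questionnaire answers into a (fictional) tool configuration."""
    return {
        "payload": PAYLOAD_BY_OS.get(q.target_os, "unsupported"),
        "stealth_profile": "high" if q.has_av else "standard",
        "persistence": q.recurring_access,   # only persist if the operator can return
        "collect_extensions": q.file_types,
    }

if __name__ == "__main__":
    answers = Questionnaire("Windows", has_av=True, recurring_access=False,
                            file_types=["docx", "xlsx"])
    print(build_config(answers))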
HIVE
HIVE is a multi-platform CIA malware suite and its associated control software.
The project provides customizable implants for Windows, Solaris, MikroTik (used in internet routers) and Linux platforms and a Listening Post (LP)/Command and Control (C2) infrastructure to communicate with these implants.
The implants are configured to communicate via HTTPS with the webserver of a cover domain; each operation utilizing these implants has a separate cover domain and the infrastructure can handle any number of cover domains.
Each cover domain resolves to an IP address that is located at a commercial VPS (Virtual Private Server) provider.
The public-facing server forwards all incoming traffic via a VPN to a 'Blot' server that handles actual connection requests from clients.
It is set up for optional SSL client authentication: if a client sends a valid client certificate (only implants can do that), the connection is forwarded to the 'Honeycomb' toolserver that communicates with the implant.
If a valid certificate is missing (which is the case if someone tries to open the cover domain website by accident), the traffic is forwarded to a cover server that delivers an unsuspicious looking website.
The Honeycomb toolserver receives exfiltrated information from the implant; an operator can also task the implant to execute jobs on the target computer, so the toolserver acts as a C2 (command and control) server for the implant.
Similar functionality (though limited to Windows) is provided by the RickBobby project.
See the classified user and developer guides for HIVE.
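For illustration only, the traffic-splitting behaviour described above (optional TLS client authentication, with authenticated clients routed to a toolserver and everyone else to a cover website) can be sketched in a few lines of Python. This is not HIVE code: the host addresses, ports and file names are placeholders, and the relay is deliberately naive.

# Minimal sketch (not HIVE itself) of a front-end TLS listener that asks for an
# optional client certificate and forwards authenticated sessions to one backend
# ("toolserver") and everything else to a cover website. Addresses, ports and
# file names are illustrative placeholders only.
import socket
import ssl

TOOLSERVER = ("10.0.0.2", 8443)   # hypothetical "Honeycomb"-style backend
COVER_SITE = ("10.0.0.3", 8080)   # hypothetical benign cover web server

def forward(client_sock, backend_addr):
    """Naive byte-for-byte relay of one request/response exchange."""
    with socket.create_connection(backend_addr) as backend:
        data = client_sock.recv(65536)
        if data:
            backend.sendall(data)
            client_sock.sendall(backend.recv(65536))

def main():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.pem")            # the cover domain's certificate
    ctx.load_verify_locations("implant_ca.pem")  # CA that issued implant client certs
    ctx.verify_mode = ssl.CERT_OPTIONAL          # ask for, but do not require, a client cert

    with socket.create_server(("0.0.0.0", 443)) as listener:
        while True:
            raw, _ = listener.accept()
            try:
                tls = ctx.wrap_socket(raw, server_side=True)
            except ssl.SSLError:
                raw.close()
                continue
            # A verified client certificate means the peer holds a cert signed by
            # implant_ca.pem; route it to the toolserver, otherwise to the cover site.
            backend = TOOLSERVER if tls.getpeercert() else COVER_SITE
            with tls:
                forward(tls, backend)

if __name__ == "__main__":
    main()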

Frequently Asked Questions

Why now?
WikiLeaks published as soon as its verification and analysis were ready. In February the Trump administration issued an Executive Order calling for a "Cyberwar" review to be prepared within 30 days.
While the review increases the timeliness and relevance of the publication it did not play a role in setting the publication date.
Redactions
Names, email addresses and external IP addresses have been redacted in the released pages (70,875 redactions in total) until further analysis is complete. Over-redaction: Some items may have been redacted that are not employees, contractors, targets or otherwise related to the agency, but are, for example, authors of documentation for otherwise public projects that are used by the agency.
Identity vs. person: the redacted names are replaced by user IDs (numbers) to allow readers to assign multiple pages to a single author. Given the redaction process used a single person may be represented by more than one assigned identifier but no identifier refers to more than one real person.
Archive attachments (zip, tar.gz, ...) are replaced with a PDF listing all the file names in the archive. As the archive content is assessed it may be made available; until then the archive is redacted.
Attachments with other binary content are replaced by a hex dump of the content to prevent accidental invocation of binaries that may have been infected with weaponized CIA malware. As the content is assessed it may be made available; until then the content is redacted.
Tens of thousands of routable IP address references (including more than 22 thousand within the United States) that correspond to possible targets, CIA covert listening post servers, intermediary and test systems, are redacted for further exclusive investigation.
Binary files of non-public origin are only available as dumps to prevent accidental invocation of CIA malware infected binaries.
Organizational Chart
The organizational chart corresponds to the material published by WikiLeaks so far.
Since the organizational structure of the CIA below the level of Directorates is not public, the placement of the EDG and its branches within the org chart of the agency is reconstructed from information contained in the documents released so far.
It is intended to be used as a rough outline of the internal organization; please be aware that the reconstructed org chart is incomplete and that internal reorganizations occur frequently.
Wiki pages
"Year Zero" contains 7818 web pages with 943 attachments from the internal development groupware. The software used for this purpose is called Confluence, a proprietary software from Atlassian.
Webpages in this system (like in Wikipedia) have a version history that can provide interesting insights on how a document evolved over time; the 7818 documents include these page histories for 1136 latest versions.
The order of named pages within each level is determined by date (oldest first). Page content is not present if it was originally dynamically created by the Confluence software (as indicated on the re-constructed page).
What time period is covered?
The years 2013 to 2016. The sort order of the pages within each level is determined by date (oldest first).
WikiLeaks has obtained the CIA's creation/last modification date for each page but these do not yet appear for technical reasons. Usually the date can be discerned or approximated from the content and the page order.
If it is critical to know the exact time/date contact WikiLeaks.
What is "Vault 7"
"Vault 7" is a substantial collection of material about CIA activities obtained by WikiLeaks.
When was each part of "Vault 7" obtained?
Part one was obtained recently and covers through 2016. Details on the other parts will be available at the time of publication.
Is each part of "Vault 7" from a different source?
Details on the other parts will be available at the time of publication.
What is the total size of "Vault 7"?
The series is the largest intelligence publication in history.
How did WikiLeaks obtain each part of "Vault 7"?
Sources trust WikiLeaks to not reveal information that might help identify them.
Isn't WikiLeaks worried that the CIA will act against its staff to stop the series?
No. That would certainly be counter-productive.
Has WikiLeaks already 'mined' all the best stories?
No. WikiLeaks has intentionally not written up hundreds of impactful stories to encourage others to find them and so create expertise in the area for subsequent parts in the series. They're there.
Look. Those who demonstrate journalistic excellence may be considered for early access to future parts.
Won't other journalists find all the best stories before me?
Unlikely. There are very considerably more stories than there are journalists or academics who are in a position to write them.

Tropes vs Women, Part 2: Ms. Male Trope, Token Chicks, and the Smurfette Principle

Part 1 Source, with transcript
Note: this review is probably not as polished as I would like it to be. But time constraints are a bitch. Still, I hope that you can follow the general arguments.
Presented below are my thoughts, interpretations, and conclusions of nearly everything said in this video, written with the intent that we discuss the arguments themselves in a more holistic sense rather than target a single aspect of them (or a single aspect of another video) to discredit the entirety. But since such a large body of text may not be conducive to conversation, here are some questions to ask in order to start a line of dialog:
  1. What do you think of the video’s conclusions? And, if different from mine, which passages did you interpret differently—and why—to reach a different conclusion?
  2. One of the key points made in the video is that representing a large demographic with a single character is prone to engendering a disproportionately negative response. While Anita focuses on female representation, and therefore discusses a topic that plenty of males may not readily empathize with, I would like to ask if the example of the Chris Hemsworth character in the Ghostbusters reboot is a good example of how the “token dude” phenomenon can likewise elicit a negative response among men? Obviously, this is a more extreme example, but it remains one that nonetheless can showcase the emotions that such a trope can induce. Do you think that the Ghostbusters reboot provided a compelling example of how the negative handling of the ‘token dude’ can irritate a demographic it may represent?
  3. It’s understandable that the points made in this video are presented in an authoritative, definite tone and hence may imply little ambiguity and that the effects of certain media can be ‘The Worst Thing Ever’™. However, isn’t it more educational to approach this with the question about whether it is possible for our environment to induce certain feelings rather than insisting that these feelings be surveyed? That is, do you think it’s required to use research to suggest what emotions people can feel from art? Even if a thoughtful, introspective, and empathetic approach may yield conclusions different from what the author intended, wouldn’t this be a much more productive form of analysis than simply denying what others feel until data is available?
  4. Perhaps the most controversial point made in this video is that Ms. Male characters, when presented in sufficient numbers, can cause impressions that females are ‘derivative humans.’ I can imagine the thought momentarily passing through the minds of people, before quickly being disregarded, so I agree with the premise that Ms. Male inculcates some notions but I doubt that any effect beyond a transient musing will occur (except among certain people with a strongly religious mindset). Still, I thought that a lot of the other points earned much more validity. Is it fair to disregard every argument just because one argument may be wrong?
After reviewing Part 1 of Anita Sarkeesian’s analysis on the Female Tropes in Videogames, some common complaints were raised, namely that interpreting any single example of her work in isolation is unacceptable (this mainly manifests as a vitriolic remark about a misunderstanding of just one comment from one game, Hitman, ironically enough, but we might review that next time). Another criticism is that insufficient evidence is provided (only some of her videos cite research), but let’s not use this to dismiss arguments altogether and instead tackle this like we’re tackling the likes of the psychologist Sigmund Freud and certain high-concept economists: right or wrong, they show that theories can be introduced to reevaluate what we know, to make us think at a different level, and to spark conversation—that is, by challenging our preconceived notions, we can get a better sense of what we believe, why we believe, and what we should believe, as well as develop a more comprehensive body of work illustrating this debate outside of trollish accusations. People didn’t go around disproving Freud by calling him a liar, troll, idiot, or—if he were Anita—an SJW, nor by insisting on proof that we have certain subconscious desires, and then calling it a day. Let’s just look at the arguments and ask ourselves whether being exposed to X may lead to some thinking Y. This is ultimately an analysis intended to draw out some of the less superficial discourse. (And I swear, if another person brings up Hitman as an excuse to discredit everything she and I say in this analysis, then you clearly have no intention to argue the subject directly, or at all).
Since this review involves assessing an author’s given body of work without their clarification, I will aim to be as fair as possible and try to read what’s presented in good faith. This is because even the basic process of talking to someone about different opinions frequently yields clear discrepancies with how language is used and interpreted. As much as we are convinced that words have definite meanings and connotations, the truth of the matter is that everyone has different pasts, brain chemistry, context, and priorities, which means that everyone has a rich internal world which contains a vocabulary of potentially endless differences from anyone else’s, and that these differences often don’t even become apparent until people have been arguing for hours. My personal (least) favorite example is when I spent days arguing with someone who inexplicably said, ”Yes, the jungles will regrow under those conditions” one day and then, “No, the jungles cannot heal under those conditions” the next, over and over this repeated until it was realized that I was using the words “heal” and “regrow” interchangeably and he was too pedantic and dumb to say, “Hey, I know that they mean the same to you but they don’t to me; forests are composed of many life-forms and therefore cannot technically ‘heal’, so please use ‘regrow.’” Seriously, days of talking in circles—and all because he was too set with his definitions to bother correcting someone else. Another example is that someone was espousing free speech … while favoring censorship … under the notion that “free speech” means “deleting anything that makes people angry.” Why would anyone claim that free speech is about utilitarianism is beyond me. So basically, I know that my brain is programmed to think words mean a certain thing, and it will interpret someone in that specific way even if it makes the other person sound like they need mental help. So, I will aim to be more than fair so that I won’t resort to the first knee-jerk reaction that leads me to accusations which tarnish all desire to learn further. You are free to provide differing interpretations, but you should point out specific parts in the treatise that are wrong or need further context. You can also provide instances when Anita was more explicit about a specific thing she said.
I will also use whatever conclusions derived from here to reflect on conclusions from prior posts. We’ll see if my mind changes once I've learned a bit more about Anita’s philosophy.
Another common complaint heard about Anita is that ‘She’s trying to convince people that gaming will turn them sexist, while cultivation theory states that they will more likely just think that the world is more sexist, plus the theory cannot turn someone into something they’re not already predisposed to.’ Perhaps she makes the claim that gaming creates sexists in another video, but here I don’t think that’s the case. She uses terms like “reinforce” and “normalize” to suggest that preconceived notions of sexism already present in individuals and society are perpetuated (rather than created) and that we don’t actually go around thinking of women as weak or inferior because of the Ms. Male and associated tropes. For the latter, it seems more like she’s saying men may ascribe impressions that we think are ‘normal’, or in other words ‘harmless’, that females may not fully appreciate. For example, the (somewhat outdated) stereotype, “Girls don’t play games” is simply used by the adherents as a matter of fact, neither good nor bad, not a statement of desiring exclusion but rather a reflection of observations we have from our exposure to other people (at least for most), while girls who want to game may wonder how such a silly stereotype ever came into prominence. That is, innocuous thoughts that girls are “different” or “other” may spread and, without conscious thought of their implications, other people might feel alienated and that they are perceived in a negative light.
Okay, now let’s review the video itself.
This episode centers around several different topics that are united here by how they “reinforce a false dichotomy wherein male is associated with the norm while female is associated with a deviation from the norm.” We’ll discuss the accuracy and emotional impact of this observation later; for now, we’ll discuss the overall premise of the argument, then the imagery itself, and then the process for how concepts are extrapolated from it.
This norm-deviation dichotomy is ultimately done in two ways: (1) men, in abstract, are often depicted as the default, with women depicted by having features added onto the male frame. And (2) likewise when women are represented by a single individual, then their characteristics are generalized in sometimes less than appealing ways.
Now that we have the short explanation out of the way, let’s delve into it with more detail about how art can define female characters as “derivative copies of men.” This seems to more closely occur in instances when the art is simplistic, with characters defined by symbolism, and these symbols are almost invariably added onto the male version. Add a bow, hair, eyelashes, makeup, or jewelry to a male figure and suddenly a female one is formed. In contrast, it’s rare to add, for example, a beard, baseball cap, or tie to a female character to generate a male one. Okay, so from this simplistic observation we can see that art may be construed in such a way, and that the phenomenon exists to some degree. So what ideas may people glean from that?
I disbelieve the criticism that Anita thinks that these tropes turn gamers into overt misogynists: the tropes exist because people think that it’s correct to put these portrayals in their art. According to the video, people don’t actively think these thoughts in a negative light, nor make these generalizations with deprecating intent. Therefore, the tropes themselves are looked at as normal and therefore benign, not worthy of a second thought unless one is made aware of the trend and considers their implications. Each example is no concern by itself, but together they establish a pattern that people can observe to get a sense of how women are thought about vis-à-vis how they’re portrayed. But what myths are normalized and perpetuated unconsciously by this? For a more overt example, someone put a bow on Pac Man to create Ms. Pac Man, which was considered normal; and then the next person thought it was normal to put a bow on their own “Ms. Male” character, and the next and the next, until Ms. Splosion Man in 2011 (close to when this video was released). All of this is indicated here:
“And taken on their own, each individual example we’ve covered in this episode might seem relatively benign or trivial, but the reason this series focuses on tropes is because they help us recognize larger, recurring patterns. Both the Ms. Male Character and the Smurfette Principle have been normalized in gaming and in mass media more broadly. So much so that the two tropes usually pass under the radar and are often reproduced unconsciously – which is part of what makes the myths they perpetuate about women so powerful and insidious in our culture.”
“There is no inherent problem with the color pink, makeup, bows or high heels as design elements on their own. However, when designers choose … to specifically distinguish female characters from the rest of the cast … it has a few negative consequences.”
But no, that’s clearly not it. There’s more. What myths are “powerful and insidious”, and what are these “few negative consequences”? We’ll look to that in just a bit.
The second method of the dichotomy presented by Anita is similar. Though it doesn’t seem to go so far as to suggest that women are “derivative”, it does serve to present girls as “different” by virtue of likewise adding features onto a gender depicted by a single representative in an otherwise all-male group. This would be the “Smurfette Syndrome” or “Token Female Character”, which are prone to having their select few female characters often restricted (in terms of appearance and personality traits) by these avatars having the role of representing women as a whole. If done with symbols that are simply added onto a male frame, then the sense of derivation is more explicit. But if it’s done with actors and more realistic animation, then it’s less about derivation and more about establishing key differences, differences which are liable to be of poor taste. The idea is that representing several different women encourages the creation of different personalities and looks to grant them more fleshed-out characterization and appearances, while being limited to one person tends to narrow the freedom of artists who find themselves needing to define such a large demographic in a way deemed universal and therefore liable to be stereotypical or reductive. By limiting an entire gender to a single character and then applying a stereotypical trait, the woman in question can be denoted by a generalized fashion sense or they may also appear to have Female Personality Syndrome (especially if they're villains), which essentially means that their character flaw reflects a problem presumed common among women (bitchy, emotionally volatile, etc.).
As for what kind of impressions, in particular, they inculcate, they may just feel lazy and use an outdated range of gender signifiers that no longer seem sensible:
“The truth of the matter is that there’s really no need to define women as derivative copies of men or to automatically resort to lazy, stereotypical or limiting gender signifiers when designing video game characters.” So, it seems that there are 4 aspects of these insidious, negative consequences:
  1. Being considered derivative is bad.
  2. Gender signifiers can be lazy.
  3. Gender signifiers can be stereotypical.
  4. Gender signifiers can be limiting. [e.g. fashion-restricting]
In addition, I would like to add a 5th, this one specifically directed as a consequence of the ‘derivative’ argument: 5. “This has the, perhaps unintended, effect of devaluing these characters and often relegating them to a subordinate or secondary status inside their respective media franchises, even when they are, on rare occasions, given a starring role in a spin-off or sequel.”
For the first of these two topics, the ‘argument of derivation’, I don’t believe Anita actually presumes that guys think, “You are just a spinoff of me, therefore inferior”, nor the inverse by women (unless one takes the story of Adam and Eve a bit farther than most), but rather that, by the trope’s ongoing use, it continues to become acceptable to use imagery that can be interpreted this way, potentially corroborating similar notions. I'm intrigued by the notion that it ‘devalues’ the characters, especially with them being considered subordinate, but I think the only example she gave of that was Mass Effect’s portrayal of FemShep in marketing. So, while I'm open to the possibility that these Ms. Male characters are looked upon without as much appreciation, I’ll need some examples to showcase this, and certainly more than just the ‘they’re less appealing because they suggest being derivative’ argument. I’ll go into more detail about other impressions when covering the Pac Man example, mainly because I'm getting tired of writing this and moving huge blocks of text around for the sake of editing would be annoying. Still, though, I can more readily grasp the notions that Ms. Male can be considered lazy and stereotypical.
For the second of these two topics, I would say the inverse could happen for men: if there’s one guy among a female cast and he’s dumb or brutish, then men may look at him and think, “This guy seems to have Male Personality Syndrome.” Or they’ll just cry SJW and whatnot, same difference. Let’s consider an example from the Ghostbusters movie reboot. In that movie, the main cast is defined by several women, each with their own distinguishing personalities, and they’re supported by one guy whose most clearly defined trait is being a complete dumbass. I'm going to say it: guys did not respond favorably to that portrayal, so it’s not unreasonable that girls had similar thoughts whenever they underwent the Smurfette Principle. Now imagine that more and more movies are presenting guys in such stereotypical fashions while women at least get more variety in their roles. I think that some guys would develop a greater resentment toward the film industry than they exhibit now. A person’s mileage may vary, but I think it’s reasonable for some people to have a poor reception of their group when it’s presented in a lazy, cliché, and perhaps even derogatory way by a single character meant to reflect such a large and diverse population. Personally, I think that anti-SJWs who make these sorts of claims are acting like childish buffoons, so, if I want to be fair by treating different groups equally, then the outcry of feminists over similar matters deserves a “shut up and let the artists make whatever they want” too. Yes, I believe that their angst is real, and artists should listen to this sort of criticism, but there does come a point when criticism just turns into whining and I won’t want to hear any more of it. So I agree with Anita, but with reservations about how far she wants to complain about this specific issue. Making token or Smurfette characters is a concern, but it’s one that deserves merely a ‘token’ remark of dissatisfaction and not a rant or accusations against the writer/director.
So let’s see how Anita covers the tropes and how she references the 54 games to discuss them and their inversions.
From the very start, Anita introduces Pac Man for being the archetypical source of this episode’s trope. And, by doing so, she introduces an interesting point. She quotes the creator of Pac Man saying that the game was intended to be appealing to women by being based around the action of eating. She calls it a sexist mind-set, but then adds that these “regressive personal or culture notions” are, “not reflected in the finished game itself.”
This is interesting because it seemed that, in the previous review, she was concerned about accidental misogyny depicted by clearly feminine avatars. Basically, by combining these two arguments, she now clearly divorces authorial intent completely: if it looks sexist or plays sexist, then that alone suffices for it to be sexist, regardless of the author being sexist. Ultimately, what this means is that she’s focused on just two factors: representation and how it affects men and women. It doesn’t matter if it was accidental: if a demographic is put into a bad light, then the art sends a bad message about that demographic that can promote bad impressions.
And then she provides a very accurate history of how college students offered the American developer of the game, Midway, a new take on the character. As an aside, I’d like to comment about another instance when the introduction of Ms. Pac Man was described in a way that some people, including some noteworthy anti-SJWs, admonished for nitpicking reasons. It’s no big deal, but Anita talking about Midway brought back memories of people ridiculing Adam Conover, in his show Adam Ruins Everything, for saying that “the developers” made this second version of the game. Yeah, Midway helped with developing the original game, at least on the porting side, so they technically were Pac Man developers; and they accepted the designs for the new version to finish programming it, built arcades for it, and shipped it around the country, so they didn’t start Ms. Pac Man but they did help make it. So Adam’s story was true, if kept simple to maintain a 4-minute runtime. People just tried really hard to call him an idiot and a liar for that, among other complaints. Here it is if interested.
So yeah, Ms. Pac Man became a beloved character for years afterwards and earned the title of being the first Ms. Male in videogames, participating in the legacy of Minnie Mouse, Supergirl, and other characters since the early 20th century.
So how do we differentiate between the man and the woman? Though not mandatory, this is done by the addition of “stereotypical design elements” (“arbitrary and abstract” ones) such as:
  1. Bows (headwear and hair being most common)
  2. Color (i.e. pink for women, another very common case)
  3. Red lipstick
  4. Eye with makeup, lashes
  5. Mole as a beauty mark
  6. Long legs, high heels, jewelry, and a boa (in promotionals)
  7. Pigtails
  8. Painted nails
  9. Midriff-bearing outfits
  10. Exaggerated breasts
  11. Heart motif
I can sort of see her point. If “Ms. Pac Man” was instead the original icon, just as a plain yellow circle, then if the male version was portrayed with a beard, baseball cap, or tie, then I might have the question, “Why am I being defined by these ‘extra’ traits that I don’t even have nor want?” For example, most men these days shave off their facial hair, so what does it mean for a character who represents men to have a beard? Why did we choose to deviate from this physiological norm? Or if I see more cases when guys wear baseball caps, would I continue to not wear caps with the thought that I am going against a common fashion? Sure, in the grand scheme of things, with the thousands of games that showcase so many different ways to portray men and women, and the sparse few dozens that make use of such simple methods of differentiation (and in a cartoonishly abstract way, no less), then I might consider those cases to be outdated outliers and move on. In addition, the changing of fashion and the sheer variety of ways that we can express ourselves via clothing can counter things like the ‘bow tie’ stereotype—especially when it’s used infrequently, often in older and simpler games, and therefore be considered an outdated mode of thought. Seeing a character wear a 19th century top hat, as another example, would bear no applicability to the fashion of current time and therefore not apply to myself, and the same could be said about women and Ms. Pac Man (more on that later). At worst, I think most people would raise the question internally and then dismiss it, with maybe some lingering thoughts about what these sorts of tropes connote. “Why is it considered acceptable to be depicted in such a basic way?” Or maybe the response would be about as bad in the reverse case displayed in the Ghost Busters reboot. I think it’s safe to say that notions can be raised based on how we observe how we’re represented in art, though to what extent relies heavily on other sources of influence, the predominance of the art in question, and how legitimately we may construe the author’s intent as having verisimilitude. So, I'm tentatively concluding that self-impressions would hardly be affected by the likes of Ms. Pac Man and other Ms. Male characters. However, I do think that restricting designs to specific features is more or less always considered lazy—if I saw a bunch of games where guys are clearly delineated by features such as baseball caps, ties, and beards, then I would consider that a limited design (especially as someone who cares little for baseball caps). So more often than not, I think the general response to seeing such a basic use of clothing to denote gender would cause one to scoff at the unoriginality of the authors. So I would agree with Anita that these depictions are, at the very least, not in good taste.
Then she introduces the concept of the Smurfette Principle. Why? I guess it’s because those characters typically display Personality Female Syndrome that ties it with Ms. Male, or else it’s to emphasize another way that girls may feel like they are the outliers: “Both the Smurfette Principle and the Ms. Male Character trope create scenarios that reinforce a false dichotomy wherein male is associated with the norm while female is associated with a deviation from the norm.” She follows that with the “Token chick” phenomenon, wherein a group of guys in a more typical society also includes one girl. I think it’s safe to say that there are good ways and bad ways of handling a ‘token’ character, so it’s not a problem in and of itself, but it’s also the case that handling a token character in the wrong way produces a disproportionately larger negative effect than if a single non-token character is presented poorly. Again, if the Ghostbusters movie featured 2 main male characters within the team, one being a doofus and the other a normal guy, then the reception would have been largely mitigated.
Anita made an interesting comment about how the limited choice of stereotypical clothing, derived from the Ms. Male trope, reduces the ‘continuum’ of options for how women may want to present themselves. For women, I will try to take my interpretation as a man and see if corollaries can be present for the other gender. I cannot speak for the historical trends of women’s fashion since the time of Ms. Pac Man, so the idea of the impact of what’s considered archetypical fashion by game developers (like bows, long hair, and makeup), and how that (or impressions of that) changed over the years, is purely anecdotal. For all I know, bows were in style in the 80s, but since then largely went away while still remaining a bit of ‘girl clothing’ in the minds of the population, as evident by Ms. Splosion Man in 2011. Likewise, makeup is not considered mandatory, but it remains as part of the ‘female kit’ so to speak. Hair, however, still remains dominantly long. So, stereotypes involving appearances like these do call to mind as being exclusive to women, but they’re not strictly adhered to (though women with shaved heads may turn some eyes). While Anita claims that these stereotypes limit the continuum of appearance options, I think the countering cultural push for variety over the past several decades, not to mention the greater attempts at diversifying women in media in recent years, has overcome any noteworthy effect of the Ms. Male trope. I mean, sure, the continuum is limited in the sense that shaved heads are nearly nonexistent among women, and considered odd, but I can’t really think of other examples. Heck, if anything, men are more limited: stockings, skirts (except kilts), makeup, earrings, dyed hair, long hair, and so many other accessories and appearance options are either out of the question or not looked favorably. Men really only have the option of having hair or shaving it. Based on that observation, while it is theoretically possible that art can influence boundaries in fashion, I think that other societal influences work much more strongly and faster, generating changes that soon make the standards set in older, more simplistic media quickly irrelevant. Outside of formal attire, women can wear basically anything that men can wear. Yes, women in plenty of games were displayed with a basic set of appearances, but now their real-life collection of options is bigger than ever and far surpass those of men. Sure, some girls may look at that imagery and question how it relates to them, but in practice those thoughts are irrelevant. Perhaps men are restricted by the appearance tropes in media too, only much more so, and in this case Anita's point would be that much stronger; but Anita contradicts this line of thought by saying that men have few male-defined accessories and that they’re not rigidly enforced. I guess it would be cool, if culture were not to have any influence, and that we would live in a time when anyone can wear anything, but I think that we, as creatures of habit, would continue to differentiate clothing by gender like as usual, maintaining typical roles until a sufficient number of outliers grow into prominence and then acceptance.
Another common complaint is that Anita never, or at least far too rarely, lists counterexamples or else dismisses them for having different standards. Well, here are those exceptions, and here’s the rationale for why they don’t share the inverse of the sexism that she ascribes to conventional applications of Ms. Male. Feel free to explain how these interpretations are incorrect or are missing key examples that counter her points:
Exceptions with color inversions are rare, such as Kirby, Bomberman, or Roy Koopa, but those are meant to reflect childlike aesthetics rather than to denote gender.
Exceptions with accessory inversions are rare, like with bows (Super Mario Bros 2) and high heels (unknown, Bare Knuckle III?) and lipstick (Super Punch-Out), but those are typically jokes.
Exceptions with the presence of male-defined accessories are not ubiquitous nor strongly enforced: Men also have accessories, like neckties and caps.
She then talks about Angry Birds:
To help illustrate one of the ways the “male as default” phenomenon operates in gaming worlds, let’s take a look at the mobile mega-hit Angry Birds.
Basically, she said that the birds displayed no specific traits that indicated either gender. But then an explicitly female bird was introduced, and that warranted further design changes to make some of the earlier, gender-nonspecific birds overtly female. This could suggest that, when making designs, women have to be designed with a bit more intentionality and therefore must utilize particular signifiers, and that, since there are a limited number of signifiers, women are stereotyped with those particular traits even further. I can sort of empathize with this too. What if I was playing a game, assuming that some of the characters were men, only to have one introduced with a beard, cap, or tie, and that led to retroactive gender assignments that change my impressions?
Anita introduces the Mass Effect series, in which the main character is playable as either a man or a woman, to say that Male Shepard was considered the ‘default in marketing.’ Some attempts that “feel” like afterthoughts or niche specialty marketing, and not a substantial/equitable approach, involve an alternate slip cover and a web-only trailer. Essentially, what she is trying to say is that the Ms. Male, even as a major component of a blockbuster videogame with a huge commercial and advertising budget, is treated with far less applause than warranted, with the female version not being portrayed as a selling point but rather as a subject that fans have to go a little out of their way to find. And marketing isn’t the only manner of distinction: the female version has a dedicated fanbase who frequently refers to her as “FemShep”, which, although meant as an affectionate nickname, does further highlight her designation as a Ms. Male Character. She is the one with the qualifier attached to her name: she is “Female Shepard” whereas the male version simply gets to be “Shepard”.
Random thought: Would it have been considered appropriate to introduce a section on Lara Croft and “the male Tomb Raider” Nathan Drake from Uncharted? It’s probably not quite as related as the other examples provided in this video, but it would have been interesting to see how people have handled the introduction of a male character in a game genre popularized by a woman.
Overall, while I think I do see her points, this is a rather harmless trope, with consequences that barely exceed a level of, “Well, that’s a lazy, stereotypical, and altogether inaccurate or outdated representation that no-one should take seriously.” Like any such example of poor artistic integrity, it deserves criticism, so that artists can learn from their mistakes and develop their creative skills, but it shouldn’t reach a level of outrage nor be used as an example that’s causing harm in how people see themselves. Perhaps Anita went a bit far in some cases to depict women as feeling like second-class citizens because Ms. Pac Man is basically just Pac Man with some accessories, but it did spark a conversation about how depictions in media can affect audience reception based on how many, or how few, characters are present to represent a large demographic. In the end, I enjoyed how such a short video has encouraged me to think outside of my comfort zone and to try putting myself in someone else’s shoes.
Addendum
This is the list of ‘general consequences’ I wrote while listening to Anita's points so that I could generate an overall idea of her arguments.
- It reinforces a “strict, binary form of gender expression”, which is “an entirely artificial and strict binary” into “two distinctly separate and opposing classes.” This specifically erases the continuum of gender presentations that fall outside this dichotomy.
- Women are “marked” while men remain largely unmarked. This leaves women with fewer forms of expression while male characters (such as the Koopa brothers) are better able to express distinct characteristics like intelligence, playfulness, and arrogance, whereas women are largely left defined as being “female” (or else given a one-dimensional personality of shallow female stereotypes, e.g. vanity, brattiness, rage, and being spoiled; this is called Personality Female Syndrome).
- The girls are always depicted in relation to their male counterpart, or as just something that came from another source. Ms. Pac Man is just Pac Man with a bow, etc.
- They “typically aren’t given their own distinctive identities and are prevented from being fully realized characters who exist on their own terms. This has the, perhaps unintended, effect of devaluing these characters and often relegating them to a subordinate or secondary status inside their respective media franchises, even when they are, on rare occasions, given a starring role in a spin-off or sequel.”
- This idea of being “cast” from an “original concept” reinforces a subordinate view of women, like with Adam and Eve. That is, men tend to be seen as “default” human beings.

The Top 4 Technical Indicator for Profitable Trading
Basic Intro to Nadex Indicators A to Z Part 2
Top 3 Technical Analysis Indicators Technical Analysis Forex Trading for Beginners
Technical Analysis Binary Options Iq Option
Using technical analysis with Nadex charts

Technical indicators make it easy for you to identify current price trends and predict where prices will move in the future. By developing effective technical analysis strategies, you can increase the amount you earn each trading day. However, while all technical indicators are useful, they each have their own set of weaknesses.
In developing a strategy based on the binary options trade types to be traded, there are tools that can assist the trader. This is where chart patterns, signal services, candlesticks and technical indicators come in. A simple tool like the pivot point calculator can be used as part of a TOUCH trade strategy with very effective results.
In addition to the above-mentioned technical indicators, there are hundreds of other indicators that can be used for trading options (like stochastic oscillators and average true range). Binary options ‘robots’ are software that find and show these technical analysis-based patterns, and the likely upcoming price movement they indicate for the instrument, so the trader doesn’t have to do the technical analysis themselves.
Leading Indicators vs Lagging Indicators In Technical Analysis
Leading indicators:
  • offer an early warning about the current market price
  • predetermine which direction to trade
  • offer accurate target prices and optimal entries on the market.
Here are the most useful leading technical indicators, to help you trade the stock market.
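Since the pivot point calculator comes up above as a simple leading-indicator tool, here is a minimal Python sketch of the standard floor-trader pivot formula. The formula itself (pivot = (high + low + close) / 3, with support and resistance levels derived from it) is the widely published version; the sample prices are made up for illustration and are not tied to any particular Nadex contract.

# Minimal sketch of the classic floor-trader pivot point calculation.
# The sample prices below are invented purely for illustration.
def pivot_points(high: float, low: float, close: float) -> dict:
    """Return the pivot point and first/second support and resistance levels
    computed from the previous period's high, low and close."""
    pp = (high + low + close) / 3
    return {
        "pivot": pp,
        "r1": 2 * pp - low,          # first resistance
        "s1": 2 * pp - high,         # first support
        "r2": pp + (high - low),     # second resistance
        "s2": pp - (high - low),     # second support
    }

if __name__ == "__main__":
    # Hypothetical prior-session prices for an index the binary settles on.
    levels = pivot_points(high=13450.0, low=13310.0, close=13402.0)
    for name, value in levels.items():
        print(f"{name}: {value:.2f}")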


The Top 4 Technical Indicator for Profitable Trading

Many binary options strategies depend on technical analysis, where you look for certain patterns in the way the price of an asset moves and try to make predictions about future price movements.
