Indiana vs Ohio State volleyball score
The Big Ten Conference
2012.11.24 01:38 Corporal_Hicks The Big Ten Conference
A sports-oriented subreddit for the Big Ten Conference and all 14 of its member institutions.
2009.09.22 04:05 heega1 All Things Husker Related
Anything and everything about the University of Nebraska Cornhuskers, with a focus on Husker Football.
2010.09.13 00:34 Swazi University of Michigan Athletics, Football, Basketball, and News
A University of Michigan athletics community for news, discussion, and more. Particularly focused on Michigan Football and Basketball, but love for all things UofM. Go Blue!
2023.06.01 18:10 DagothHertil MoonSharp or How we combined JSON and LUA for game ability management
Introduction
During the development of our card game Conflux, we wanted an easy way to create abilities with various effects for our cards, while writing a smaller amount of code per ability. We also wanted to try to add a simple modding capability for abilities.
The format we introduced
Some of you may be familiar with MoonSharp, a LUA interpreter for C# often used in the Unity engine to add scripting support to a game. That's what we took as a base for writing the code for abilities. Each ability can subscribe to different events, such as when a card takes damage, when it is placed on the field, or when an ability is used manually on specific targets. Besides event handlers, we needed a way to specify metadata like the mana cost of abilities, cooldown, icon, etc., so in the first iteration of the system each ability was a pair of files: a JSON metadata file and a LUA code file.
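To make the setup concrete, here is a minimal sketch of what the hosting side can look like. `Script`, `DynValue` and `DoString` are MoonSharp's actual API; the `onMyUse` registration function and the event plumbing are simplified stand-ins for our real API, not the actual implementation:

```csharp
using System;
using System.Collections.Generic;
using MoonSharp.Interpreter;

class AbilityHost
{
    // LUA functions registered via onMyUse; fired when the "use" event happens.
    private readonly List<DynValue> _onUseHandlers = new List<DynValue>();

    public void LoadAbility(string luaSource)
    {
        var script = new Script();

        // Expose a C# function that the LUA script calls to subscribe to an event.
        script.Globals["onMyUse"] = (Action<DynValue>)(handler => _onUseHandlers.Add(handler));

        // Running the script only registers handlers; nothing else happens yet.
        script.DoString(luaSource);
    }

    public void FireUseEvent()
    {
        foreach (DynValue handler in _onUseHandlers)
            handler.Function.Call();
    }
}
```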
It was fine initially, but we quickly realized that abilities typically have ~20 lines of JSON and ~20 lines of LUA code, and that having two files per ability is wasteful, so we developed a simple format that combines the JSON and the LUA.
Since LUA code can never really be valid JSON (unless you are OK with slapping all the code into a single line, or with escaping every quote you have), we put the JSON part of the abilities into the LUA script instead: the first LUA block comment section in the script is treated as a JSON header.
Here is an example of a "Bash" ability in LUA (it deals damage and locks the target cards):
--[[ { "mana_cost": 0, "start_cooldown": 0, "cooldown": 3, "max_usage": -1, "icon": "IconGroup_StatsIcon_Fist", "tags": [ "damage", "debuff", "simple_damage_value" ], "max_targets": 1, "is_active": true, "values": { "damage": 5, "element": "physical" }, "ai_score": 7 } --]] local function TargetCheck() if Combat.isEnemyOf(this.card.id, this.event.target_card_id) then return Combat.getAbilityStat(this.ability.id, "max_targets") end end local function Use() for i = 1, #this.event.target_card_ids do Render.pushCardToCard(this.card.id, this.event.target_card_ids[i], 10.0) Render.createExplosionAtCard("Active/Bash", this.event.target_card_ids[i]) Render.pause(0.5) Combat.damage(this.event.target_card_ids[i], this.ability.values.damage, this.ability.values.element) Combat.lockCard(this.event.target_card_ids[i]) end end Utility.onMyTargetCheck(TargetCheck) Utility.onMyUse(Use)
Inheritance for abilities
It may be a completely valid desire to want to reuse the code of some abilities and just make small adjustments. We solved this with a merge function for the JSON header data: it looks for a parent field within the header, loads the header data for the ID given in that field, and merges it with the data provided in the rest of the current JSON header. It also does this recursively, though I don't foresee us actually using that functionality, as we typically write one generic ability and have the inherited ability replace only what it needs to replace.
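A sketch of how such a merge can be implemented with Newtonsoft.Json's JObject.Merge, reusing the hypothetical ExtractHeader from above; the loadAbilitySource lookup and the depth cap are illustrative assumptions, not our exact code:

```csharp
using System;
using Newtonsoft.Json.Linq;

static class AbilityInheritance
{
    // Resolves "parent" chains recursively: parent data first, child overrides on top.
    public static JObject ResolveHeader(JObject header, Func<string, string> loadAbilitySource, int depth = 0)
    {
        if (depth > 8)
            throw new InvalidOperationException("Suspiciously deep ability inheritance chain.");

        var parentId = (string)header["parent"];
        if (parentId == null)
            return header; // no parent: the header is already complete

        JObject parent = AbilityFormat.ExtractHeader(loadAbilitySource(parentId));
        parent = ResolveHeader(parent, loadAbilitySource, depth + 1);

        // Child values win; arrays (e.g. "tags") are replaced wholesale, not concatenated.
        parent.Merge(header, new JsonMergeSettings { MergeArrayHandling = MergeArrayHandling.Replace });
        parent.Remove("parent");
        return parent;
    }
}
```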
Here is an example of how a simple damaging ability can be defined:
--[[ { "parent": "Generic/Active/Elemental_projectiles", "cooldown": 3, "icon": "IconGroup_StatsIcon01_03", "max_targets": 2, "values": { "damage": 2, "element": "fire", "render": "Active/Fireball" }, "ai_score": 15 } --]]
So as you can see, there is no code at all: 100% of it is inherited from the generic parent ability.
Code in a child ability is handled by executing the LUA ability files starting from the topmost parent and traversing down to the child. Since all the ability logic usually lives in the event handlers, nothing actually changes while those LUA scripts execute (only subscription info is added). If the new ability you write needs to actually modify the code of the parent, you can just unsubscribe from the events you want to change and rewrite the handlers yourself.
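On the host side this can look roughly like the following: every script in the chain runs, parent first, inside one shared MoonSharp Script instance, so the child's subscriptions (and unsubscriptions) land on top of the parent's. This is only a sketch, with the API registration step elided:

```csharp
using System.Collections.Generic;
using MoonSharp.Interpreter;

static class AbilityLoader
{
    // Executes an inheritance chain parent-first in a single shared Script,
    // so a child can unsubscribe/resubscribe handlers the parent registered.
    public static Script LoadChain(IEnumerable<string> chainTopParentFirst)
    {
        var script = new Script();
        // ... register the Combat/Render/Utility API tables on script.Globals here ...
        foreach (string luaSource in chainTopParentFirst)
            script.DoString(luaSource);
        return script;
    }
}
```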
MoonSharp in practice
MoonSharp as a LUA interpreter works perfectly fine, IMO. No performance issues or bugs with the LUA code execution as far as I can see.
The problems for us started when we tried to use VS Code debugging; it straight up does not work for us. To make it behave, we had to make quite a few adjustments, including:
- Create a new breakpoint storage mechanism, because the existing one does not trigger breakpoints
- Add a customizable exception handler for when an exception occurs within the C# API. By default you just get a whole load of nothing and your script just dies. We added logging and an automatic breakpoint mechanism (which is supposed to be there but just does not work)
- Build a proper local/global variable browser. The existing one just displayed (table: 000012) instead of letting you browse variables like a normal human being.
- Pass Unity logs through to VS Code during debugging. This mostly worked out of the box when errors were within the LUA code, but anything that happens in our C# API is only visible in the Unity console (or Player.log), and when a breakpoint is triggered, good luck actually seeing that log with the Unity screen frozen and the logs not flushed yet (I suppose we could also flush the logs in that case?)
What is missing
While we are mostly satisfied with the results of the current implementation, there are a couple of things worth pointing out as areas that could be improved:
- When you are done writing an ability, you can't really know whether the code you wrote is valid or whether the data within the JSON header is valid. The VS Code linters I tried either complain about the LUA code when highlighting the JSON, or ignore the JSON when highlighting the LUA code
- Good luck killing an infinite loop within the LUA code (though the same is true for C#). An execution limit needs to be implemented to avoid that problem; better to have an invalid game state than to have to kill the process (see the sketch after this list)
- By placing the metadata of the abilities within the same code file, you lock yourself out of the opportunity to have a unified place to store all your ability metadata (e.g. a large data sheet with all the values visible, so you can sort through it and find inconsistencies). This can be addressed by a converter from those LUA files to, say, a CSV file, or by a dedicated data browser within the game
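One way to get such an execution limit out of MoonSharp is to run handlers as coroutines with a forced-yield budget. AutoYieldCounter is MoonSharp's preemption mechanism as far as I know; the concrete budget numbers here are made up:

```csharp
using System;
using MoonSharp.Interpreter;

static class SafeExec
{
    // Runs a LUA function as a MoonSharp coroutine that is forced to yield
    // every N VM instructions, so a runaway loop can be abandoned instead of
    // freezing the process.
    public static DynValue RunWithBudget(Script script, DynValue luaFunc, int maxSlices = 1000)
    {
        DynValue co = script.CreateCoroutine(luaFunc);
        co.Coroutine.AutoYieldCounter = 10000; // instructions per slice

        DynValue result = co.Coroutine.Resume();
        for (int slice = 0; result.Type == DataType.YieldRequest && slice < maxSlices; slice++)
            result = co.Coroutine.Resume();

        if (result.Type == DataType.YieldRequest)
            throw new TimeoutException("Ability script exceeded its execution budget.");
        return result;
    }
}
```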
Why not just write everything in LUA?
It is possible to convert the JSON header part into a LUA table. With this you get the benefits of syntax highlighting and comments. The downside is that to read the metadata for an ability you now have to run a LUA VM and execute the script to get any info from it. This implies there is no read-only access to ability information, because the script will inevitably try to interact with some API that modifies the game state (at the very least it adds event listeners), unless you change the API to have a read-only mode.
Another point is that having a simple JSON part in the file lets you use a trivial script to extract it from the .lua file, after which it can be used by external tools (which typically don't support LUA).
TL;DR
Adding JSON as a header to LUA has the following pros and cons compared to just writing C# code per ability:
Pros:
- Can hot-swap code with adjustments for faster iteration
- No compilation required, all the scripts can be initialized during the scene loading process
- Can evaluate the same code an ability would use within a console for testing
- Allows abilities to be modded into the game with a lower possibility of malicious code (as long as the exposed API is safe)
Cons:
- Requires a compatibility layer between C# and LUA (you still have to write the API in C# to reduce code bloat, but there is an extra layer you need to write to pass this API to LUA)
- MoonSharp VS Code debugging is buggier than using the Visual Studio debugger for C#
- Doesn't really reduce the number of lines of code you need to manage. While you avoid boilerplate code, with smart management that boilerplate can be reduced to just a good base class implementation for your abilities, with almost no overhead
submitted by DagothHertil to IndieDev
2023.06.01 17:00 _call-me-al_ [Thu, Jun 01 2023] TL;DR — This is what you missed in the last 24 hours on Reddit
If you want to receive this as a daily email in your inbox, you can now join at this link
- Germany: Ukraine can launch attacks on Russian territory to defend itself
- White House: We are against strikes on Russian territory, but it's up to Ukraine to decide
- Russian Volunteer Corps and Freedom of Russia Legion announce breaking into Russia again
- Woman who accused Biden of sexually assaulting her in 1993 defects to Russia
- Trump captured on tape talking about classified document he kept after leaving the White House
- Actor Danny Masterson convicted of two counts of rape at second Los Angeles trial
- One in six people who had COVID-19 without first being vaccinated report still feeling health effects two years after the virus, according to Swiss research: 17% did not return to normal health and 18% reported COVID-19-related symptoms after 24 months
- Earth has pushed past seven out of eight scientifically established safety limits and into "the danger zone," not just for an overheating planet that's losing its natural areas, but for the well-being of people living on it
- Researchers have shown that an Australian wild tobacco plant could be used to grow medicines in large quantities, bringing us a step closer to making 'growing medicines in plants' a reality
- New 'quasi-moon' discovered near Earth has been travelling alongside our planet since 100 BC (Live Science)
- Stunning Photo of Earth Taken by Europe's Powerful New Satellite
- NASA's UFO Research Team Briefs the Public
- Scientists report world's first X-ray of a single atom
- Bill Nelson, head of NASA: 'We want to protect the water on the Moon to prevent China from taking it over'
- New blood biomarker can predict if cognitively healthy elderly will develop Alzheimer's disease
- What would you ban if you knew you had final say?
- What's something most people find attractive that you can't stand?
- [SERIOUS] What organization or institution do you consider to be so thoroughly corrupt that it needs to be destroyed?
- TIL of cascatelli, a new pasta shape invented in 2021 by podcaster Dan Pashman for maximum "sauceability", "forkability" and "toothsinkability"
- TIL a chess robot in Moscow broke the finger of its 7-year-old human opponent after the boy made a quick move without waiting for the robot to complete its turn
- TIL that the acronym "R.I.P." has been engraved on tombstones since at least the fifth century. "Rest in Peace" is the English translation of a Latin phrase with the same acronym
- How the job, nationality, and gender of celebrities have changed since the 1700s [OC]
- [OC] The United States of Nearest Neighbors: a map of the continental US if the state borders were determined by the closest state capital (using the great circle distance)
- Coors, Miller take Bud Light share amid controversy
- Having a passion for cooking while being broke...
- What are some good, simple sides to have with steak?
- What's the ideal cooking oil for cooking ground beef & chicken, frying taquitos in a pan, and stir frying?
- [I ate] A 1lb Philly cheesesteak
- [i ate] donuts
- Pork rib sliders with bread and butter pickles on griddled keto buns [homemade]
- Sergio Calderón Dead: 'Pirates Of The Caribbean,' 'Men In Black' Actor Was 77
- Official Poster for Yorgos Lanthimos' 'Poor Things'
- New Poster for Indiana Jones and the Dial of Destiny
- Sushi Manatee, Oddarette (Me), Digital Painting, 2023
- Model 27. A Tribute to Hajime Sorayama, Adan Vazquez (me), acrylic on illustration board, 2023
- 2 tuna cans pleas, by me, digital art, 2022
- Danny Masterson Convicted on Two Counts of Forcible Rape, Faces 30 Years in Prison
- Writers' Shut-It-Down Strategy Has Been Effective, Executives Privately Concede
- 'The Righteous Gemstones' Adds Stephen Dorff, Iliza Shlesinger, Sturgill Simpson, and Five Others to Season 3 Cast
- Arkansas VS Weed
- Spotted in Cleveland, Ohio at a gas station. May 2023
- Took a picture of my eye using the macro lens on my iPhone
- I turn into a hot dog
- NO STOPPING
- He want to say hello to everybody
- Making of Vennetta
- This house that has a tunnel through a juniper bush to get to their front door
- This car is full of bumper stickers that say bumper sticker
- Gas in Ohio costs 1 cent per gallon
- Dish towel used by R. Lee to surrender to Union forces, known as the final flag of the Confederacy
- The elephant's penis is prehensile. They can use it to prop themselves up, swat flies from their side and scratch themselves on their stomach
- Headquarters of the India National Fisheries Board
- The Nintendo Captcha System gave me an image of a dude taking a leak…
- Cat sneezes into a bowl of flour
- If birds were humans
- Sport is life!
- Help I am stuck on the sofa. What do I do
- Red pandas eating red apples
Get this as a daily email!
submitted by _call-me-al_ to RedditTLDR
2023.06.01 16:18 OregonSageMonke I said she looked like Sméagol with a wig and implants and they turned off commenting 🫤😔
2023.06.01 16:13 rustybelts Ranking FBS Programs by Flair:Enrollment Ratio
Some offseason drivel.
A simple ratio: the number of instances of an FBS school's flair (including any alternate flairs) divided by the school's enrollment.
Note: This is a count of flair, not users. Example: if a user has [UCF] primary flair and [UCF alternate (the Citronaut)] secondary flair, that counts as 2 in the Flair column rather than 1. In other words, users who double up on their school's flair offerings count twice. There certainly are users who double up, but I feel like it does not affect the numbers much. This method makes sure to capture users who do not use their school's standard flair and instead use their school's alternate flair. (Note: Not every school has alternate flair. * Bearcat tears 😿 *)
Data sources:
- Enrollment (Some of these are out of date by a few years. This was the easiest source to pull from. I am not going to track down current official enrollment numbers for 133 separate institutions.)
- Flair count (Accessed 05/31/2023)
Rank | Program | Conf. | Flair | Enrollment | Ratio |
---|---|---|---|---|---|
1 | Notre Dame | FBS Independents | 7,745 | 13,139 | 58.9% |
2 | Michigan | Big Ten | 17,127 | 50,278 | 34.1% |
3 | Oregon | Pac-12 | 6,789 | 22,257 | 30.5% |
4 | Alabama | SEC | 11,674 | 38,316 | 30.5% |
5 | Ohio State | Big Ten | 18,118 | 61,677 | 29.4% |
6 | Nebraska | Big Ten | 6,963 | 24,431 | 28.5% |
7 | Navy | American | 1,249 | 4,528 | 27.6% |
8 | Oklahoma | Big 12 | 7,349 | 28,042 | 26.2% |
9 | Georgia | SEC | 10,467 | 40,118 | 26.1% |
10 | Army | FBS Independents | 1,189 | 4,594 | 25.9% |
11 | Clemson | ACC | 6,127 | 27,341 | 22.4% |
12 | Tennessee | SEC | 6,856 | 31,701 | 21.6% |
13 | Auburn | SEC | 6,349 | 31,526 | 20.1% |
14 | LSU | SEC | 7,170 | 35,912 | 20.0% |
15 | Texas | Big 12 | 10,325 | 51,991 | 19.9% |
16 | Miami | ACC | 3,307 | 19,096 | 17.3% |
17 | Florida | SEC | 9,042 | 55,781 | 16.2% |
18 | TCU | Big 12 | 1,932 | 11,938 | 16.2% |
19 | Penn State | Big Ten | 7,297 | 47,560 | 15.3% |
20 | Iowa | Big Ten | 4,447 | 29,909 | 14.9% |
21 | Air Force | Mountain West | 613 | 4,181 | 14.7% |
22 | Florida State | ACC | 6,392 | 45,130 | 14.2% |
23 | Wisconsin | Big Ten | 6,543 | 47,932 | 13.7% |
24 | Michigan State | Big Ten | 6,394 | 49,659 | 12.9% |
25 | Arkansas | SEC | 3,675 | 29,068 | 12.6% |
26 | South Carolina | SEC | 4,455 | 35,471 | 12.6% |
27 | Virginia Tech | ACC | 4,615 | 37,279 | 12.4% |
28 | Stanford | Pac-12 | 2,132 | 17,680 | 12.1% |
29 | Texas A&M | SEC | 8,597 | 72,530 | 11.9% |
30 | West Virginia | Big 12 | 2,996 | 25,474 | 11.8% |
31 | Oklahoma State | Big 12 | 2,770 | 24,660 | 11.2% |
32 | Baylor | Big 12 | 2,238 | 20,626 | 10.9% |
33 | USC | Pac-12 | 4,995 | 49,318 | 10.1% |
34 | Georgia Tech | ACC | 4,426 | 43,844 | 10.1% |
35 | Tulsa | American | 384 | 3,832 | 10.0% |
36 | Kansas State | Big 12 | 1,940 | 20,229 | 9.6% |
37 | Iowa State | Big 12 | 2,895 | 30,708 | 9.4% |
38 | Ole Miss | SEC | 1,957 | 21,203 | 9.2% |
39 | Kentucky | SEC | 2,782 | 30,390 | 9.2% |
40 | Washington | Pac-12 | 4,721 | 52,439 | 9.0% |
41 | Vanderbilt | SEC | 1,178 | 13,796 | 8.5% |
42 | Missouri | SEC | 2,640 | 31,412 | 8.4% |
43 | Washington State | Pac-12 | 2,361 | 29,843 | 7.9% |
44 | Wake Forest | ACC | 703 | 8,947 | 7.9% |
45 | Mississippi State | SEC | 1,775 | 23,086 | 7.7% |
46 | Kansas | Big 12 | 2,035 | 26,780 | 7.6% |
47 | North Carolina | ACC | 2,377 | 31,733 | 7.5% |
48 | Utah | Pac-12 | 2,535 | 34,464 | 7.4% |
49 | Pittsburgh | ACC | 2,134 | 29,238 | 7.3% |
50 | Texas Tech | Big 12 | 2,927 | 40,542 | 7.2% |
51 | Northwestern | Big Ten | 1,631 | 22,933 | 7.1% |
52 | Minnesota | Big Ten | 3,474 | 52,376 | 6.6% |
53 | Boston College | ACC | 991 | 15,046 | 6.6% |
54 | Louisville | ACC | 1,441 | 22,140 | 6.5% |
55 | Appalachian State | Sun Belt | 1,309 | 20,641 | 6.3% |
56 | Syracuse | ACC | 1,364 | 21,772 | 6.3% |
57 | Virginia | ACC | 1,616 | 26,026 | 6.2% |
58 | Colorado | Pac-12 | 2,280 | 37,956 | 6.0% |
59 | Boise State | Mountain West | 1,545 | 25,830 | 6.0% |
60 | NC State | ACC | 2,189 | 36,831 | 5.9% |
61 | Duke | ACC | 1,047 | 17,620 | 5.9% |
62 | Cincinnati | Big 12 | 2,370 | 40,281 | 5.9% |
63 | SMU | American | 728 | 12,385 | 5.9% |
64 | UCLA | Pac-12 | 2,737 | 47,516 | 5.8% |
65 | Oregon State | Pac-12 | 1,892 | 33,193 | 5.7% |
66 | Tulane | American | 746 | 13,127 | 5.7% |
67 | California | Pac-12 | 2,563 | 45,435 | 5.6% |
68 | Purdue | Big Ten | 2,688 | 49,639 | 5.4% |
69 | Maryland | Big Ten | 2,149 | 41,272 | 5.2% |
70 | UCF | Big 12 | 3,619 | 70,406 | 5.1% |
71 | Wyoming | Mountain West | 588 | 11,479 | 5.1% |
72 | BYU | Big 12 | 1,592 | 34,802 | 4.6% |
73 | Indiana | Big Ten | 2,043 | 45,328 | 4.5% |
74 | Rice | American | 373 | 8,285 | 4.5% |
75 | UAB | American | 998 | 22,289 | 4.5% |
76 | Marshall | Sun Belt | 484 | 11,125 | 4.4% |
77 | Illinois | Big Ten | 2,450 | 56,607 | 4.3% |
78 | Western Michigan | MAC | 741 | 19,038 | 3.9% |
79 | Houston | Big 12 | 1,828 | 47,031 | 3.9% |
80 | Coastal Carolina | Sun Belt | 406 | 10,473 | 3.9% |
81 | Central Michigan | MAC | 597 | 15,465 | 3.9% |
82 | Toledo | MAC | 653 | 17,045 | 3.8% |
83 | Memphis | American | 768 | 21,622 | 3.6% |
84 | Rutgers | Big Ten | 1,790 | 50,804 | 3.5% |
85 | Arizona State | Pac-12 | 2,715 | 77,881 | 3.5% |
86 | Georgia Southern | Sun Belt | 913 | 27,091 | 3.4% |
87 | Hawai'i | Mountain West | 640 | 19,097 | 3.4% |
88 | Northern Illinois | MAC | 534 | 16,234 | 3.3% |
89 | South Alabama | Sun Belt | 452 | 13,992 | 3.2% |
90 | Louisiana Tech | Conference USA | 355 | 11,037 | 3.2% |
91 | Arizona | Pac-12 | 1,551 | 49,471 | 3.1% |
92 | Ohio | MAC | 753 | 24,429 | 3.1% |
93 | James Madison | Sun Belt | 659 | 22,166 | 3.0% |
94 | Jacksonville State | Conference USA | 269 | 9,238 | 2.9% |
95 | Troy | Sun Belt | 416 | 14,901 | 2.8% |
96 | USF | American | 1,232 | 44,322 | 2.8% |
97 | Louisiana | Sun Belt | 426 | 16,225 | 2.6% |
98 | Miami (OH) | MAC | 502 | 19,216 | 2.6% |
99 | Bowling Green | MAC | 457 | 17,645 | 2.6% |
100 | Temple | American | 897 | 35,626 | 2.5% |
101 | Connecticut | FBS Independents | 800 | 32,146 | 2.5% |
102 | Southern Miss | Sun Belt | 343 | 14,146 | 2.4% |
103 | San Diego State | Mountain West | 778 | 35,732 | 2.2% |
104 | ECU | American | 602 | 28,021 | 2.1% |
105 | Colorado State | Mountain West | 702 | 32,777 | 2.1% |
106 | Eastern Michigan | MAC | 328 | 15,370 | 2.1% |
107 | Fresno State | Mountain West | 492 | 24,585 | 2.0% |
108 | UTSA | American | 688 | 34,734 | 2.0% |
109 | Akron | MAC | 274 | 14,516 | 1.9% |
110 | WKU | Conference USA | 292 | 16,750 | 1.7% |
111 | Middle Tennessee | Conference USA | 359 | 20,857 | 1.7% |
112 | North Texas | American | 724 | 42,454 | 1.7% |
113 | Nevada | Mountain West | 348 | 21,034 | 1.7% |
114 | Utah State | Mountain West | 429 | 27,426 | 1.6% |
115 | Texas State | Sun Belt | 576 | 37,864 | 1.5% |
116 | Old Dominion | Sun Belt | 345 | 23,494 | 1.5% |
117 | Arkansas State | Sun Belt | 188 | 12,863 | 1.5% |
118 | Ball State | MAC | 270 | 19,337 | 1.4% |
119 | Buffalo | MAC | 447 | 32,332 | 1.4% |
120 | ULM | Sun Belt | 114 | 8,565 | 1.3% |
121 | Kent State | MAC | 338 | 26,597 | 1.3% |
122 | FAU | American | 374 | 30,155 | 1.2% |
123 | Charlotte | American | 376 | 30,448 | 1.2% |
124 | UMass | FBS Independents | 389 | 32,045 | 1.2% |
125 | Sam Houston | Conference USA | 247 | 21,679 | 1.1% |
126 | New Mexico State | Conference USA | 148 | 13,904 | 1.1% |
127 | New Mexico | Mountain West | 228 | 21,738 | 1.0% |
128 | UNLV | Mountain West | 313 | 30,679 | 1.0% |
129 | San José State | Mountain West | 330 | 37,133 | 0.9% |
130 | Georgia State | Sun Belt | 478 | 55,466 | 0.9% |
131 | UTEP | Conference USA | 184 | 24,003 | 0.8% |
132 | FIU | Conference USA | 217 | 56,732 | 0.4% |
133 | Liberty | Conference USA | 225 | 95,148 | 0.2% |
Here are the conference summaries:
Rank | Conf. | Flair | Enrollment | Ratio |
---|---|---|---|---|
1 | SEC | 78,617 | 490,310 | 16.0% |
2 | Big Ten | 83,114 | 630,405 | 13.2% |
3 | FBS Independents | 10,123 | 81,924 | 12.4% |
4 | ACC | 38,729 | 382,043 | 10.1% |
5 | Big 12 | 46,816 | 473,510 | 9.9% |
6 | Pac-12 | 37,271 | 497,453 | 7.5% |
7 | American | 10,139 | 331,828 | 3.1% |
8 | MAC | 5,894 | 237,224 | 2.5% |
9 | Sun Belt | 7,109 | 289,012 | 2.5% |
10 | Mountain West | 7,006 | 291,691 | 2.4% |
11 | Conference USA | 2,296 | 269,348 | 0.9% |
submitted by rustybelts to CFB
2023.06.01 15:50 RhetoricalObsidian The U.S. states where residents have the biggest vocabulary
2023.06.01 14:46 ZandrickEllison [OC] Who is the best second banana? A ranking of the best sidekicks among all the 2000s title teams
We often hear the question: "Is Player X good enough to be the best player on a championship team?"
Less often, you hear: "Is Player Y good enough to be the second best player on a championship team?"
It's time to give these second bananas their due. We're going through the 2000s and ranking each SECOND best player on the title teams. Their values vary -- some were merely good starters, some were All-Stars, and some were arguably top 5 players in the entire league.
Ranking them isn't easy, but we're going to keep a few caveats in mind.
--- We're ranking based on the second banana's play during the course of THAT SEASON -- not their careers overall.
--- Statistics will be important, but not the be-all and end-all. After all, there's a big difference between stats from 2003 and stats from 2023. As a result, we may often defer to season accolades like "All-Star" or "All-NBA."
With all that said, here are my rankings, but feel free to disagree and explain your own ranks below.
THE BEST (title-winning) SECOND BANANAS of the 2000s
(23) Tyson Chandler, 2011 Dallas Mavericks
The 2010-11 Dallas Mavericks were probably the most unlikely champion of the 2000s, with Dirk Nowitzki and a cast of older veterans who were seemingly on the decline. At the time, Jason Kidd was 37, Caron Butler was 30, Shawn Marion was 32, and Peja Stojakovic was 33.
You can make the case for Jason Terry to be the second banana here. Terry averaged 15.8 points off the bench for the Mavs that year, which is more impressive when you consider the context. (teams averaged 99.6 PPG then, 114.7 PPG now). Terry also pumped his numbers up to 18.0 PPG in their stunning upset over Miami in the Finals.
Still, we'll give the slight nod to Tyson Chandler as the team's second most impactful player overall. Chandler finished 2nd team All-Defense, and his strong playoff showing helped spearhead his DPOY campaign the following season (for the Knicks). Either way -- whether you give the nod to Chandler, Terry, or Kidd -- this would rank at the bottom of our list. None of those players was flirting with All-Star status.
(22) Tony Parker, 2003 San Antonio Spurs
The Parisian Torpedo will be a frequent contributor to this list -- logging a record-setting 3 "second banana" awards for his contributions to the Spurs' incredible run.
Naturally, his first would be his least impactful. Back in 2002-03, Tony Parker was still only 20 years old and in his second season in the league. Still, he was probably their second best player after a prime Tim Duncan. He averaged 15.5 points and 5.3 assists (solid numbers for the era) and held his own against Jason Kidd in the Finals. Parker wouldn't be considered a star yet though -- his first All-Star appearance came three years later.
(21) Andrew Wiggins, 2022 Golden State Warriors
Golden State's title last year was their biggest surprise run, fueled by Steph Curry and a solid-but-unspectacular supporting cast. Among them, you could debate the virtues and flaws of the second bananas -- Draymond Green struggled offensively, Jordan Poole struggled defensively, Klay Thompson missed significant time coming back from injury.
Of that group, I'd suggest Andrew Wiggins was their most well-rounded and consistent second banana. He averaged 17.2 PPG and even made the All-Star team. Better yet, he became a "winning player." He scored more efficiently (39.3% from 3) and played better defense -- particularly in the Finals. That said, Wiggins was probably on the level of a "good starter" more than a typical All-Star. For that reason, we'll rank him below a few others who didn't make the All-Star team.
(20) Tony Parker, 2005 San Antonio Spurs
Tony Parker re-emerges on our list and climbs even higher now in his age-22 season. He still didn't make the All-Star team, but he upped his numbers to 16.6 points and 6.1 assists per game. Again, we have to remember that these averages look better when you factor in the points "inflation" of today. Overall, we'll give him a slight edge over rising Manu Ginobili (who averaged 16.0 PPG off the bench that year), although it's debatable. Of the two, Ginobili played better in the Finals against Detroit. Still, whether it's Parker or Ginobili, the second banana would rank around this same range.
(19) Kyle Lowry, 2019 Toronto Raptors
The Toronto Raptors finally broke through when they rented mercenary Kawhi Leonard for the year, but Leonard was backed up by a very strong supporting cast overall.
Among them, we're giving a slight nod to the old dog Kyle Lowry (then 32) over the rising star Pascal Siakam. Lowry felt like more of the heartbeat to the team. The numbers don't jump off the page (14.2 PPG), but he was a strong two-way player who averaged 8.7 assists and 1.4 steals per game.
(18) Khris Middleton, 2021 Milwaukee Bucks
We have another second banana debate here, although we're leaning to Khris Middleton over Jrue Holiday. It's easy for our memory to get foggy now that Middleton has struggled post injury, but he was a very good starter before that. He averaged 20.4 points, 6.0 rebounds, and 5.4 assists per game (not far behind Jrue Holiday's 6.1).
While the 29-year-old Middleton didn't make the All-Star team this season, he was an All-Star caliber player; in fact, he made the team both the prior year and the year after.
(17) Chauncey Billups, 2004 Detroit Pistons
We're giving the primary "star" designation to Ben Wallace here. While "Big Ben" only averaged 9.5 PPG, his defense was the Pistons' biggest differentiator. In 2003-04, Wallace won Defensive Player of the Year and even finished 7th in MVP voting.
Among the other starters, we're giving the nod to Chauncey Billups over Rip Hamilton and Rasheed Wallace. Hamilton had the slight edge in scoring (17.6 PPG to 16.9 PPG), but Billups led the team with 5.7 assists per game and tended to be their go-to guy offensively when need be. Sure enough, "Mr. Big Shot" would go on to win Finals MVP.
(16) Tony Parker, 2007 San Antonio Spurs
As Tim Duncan aged, Tony Parker got better and better. His best second banana season would come in 2006-07. Now age 24, Parker averaged 18.6 points and 5.5 assists per game and made the All-Star team. He shot less threes and relied more on his ability to drive and convert in the paint. He shot 52.0% from the field overall.
In the Finals, the Cleveland Cavaliers had no answer for Parker's scoring. He whipped them to the tune of 24.5 points per game (shooting 56.8% from the field in the process). Parker would win Finals MVP for his part in the sweep.
(15) Kawhi Leonard, 2014 San Antonio Spurs
For their last title, the San Antonio Spurs were more the sum of their parts than any one true star. Tim Duncan was 37, Manu Ginobili was 36. Tony Parker had probably graduated from second banana to their marquee player -- he was their leading scorer and lone All-Star that season.
After him, we'll call Kawhi Leonard their next best player. While Leonard wasn't a big name or big scorer yet (averaging 12.8 PPG), he still had a massive impact on winning. He was an efficient offensive player (shooting 52.2% from the field) and an excellent defender. The raw stats suggest that Leonard should be lower than this, but the advanced stats suggest he was already an elite player. Overall, his BPM of +5.0 led the team. We'll make the playoffs the tiebreaker, where Leonard stepped up his scoring and won Finals MVP. If you want to consider him the team's best player this year (which feels like a bit of revisionist history), Parker would rank around this same range.
(14) Kyrie Irving, 2016 Cleveland Cavaliers
Young Kyrie Irving (then 23) also gets a boost for his excellent playoff performance. In the Finals, Irving exploded for 27.1 points per game and helped the Cavs defeat the 73-win Golden State Warriors.
If you look at his 2015-16 as a whole, it gets harder to rank Irving much higher than this. He didn't play that great in the regular season; in fact, it may have been the worst of his career. He only played 53 games, only shot 32.1% from 3 (a career low), and only averaged 4.7 assists (also a career low). He also missed the All-Star game. In terms of peak performance, Irving was an excellent second banana (particularly for LeBron James), but if we gauge this exercise season-by-season he'd rank around middle of the pack.
(13) Pau Gasol, 2009 Los Angeles Lakers
Kobe Bryant rightfully gets the lion's share of credit for the Lakers' repeat from 2009-10, but history may forget how good Pau Gasol was when he arrived from Memphis to help out the cause.
Right in the thick of his prime at age 28, Gasol averaged 18.9 points, 9.6 rebounds, and 3.5 assists. His size, skill, and basketball IQ made him the perfect mind meld with Bryant. All in all, Gasol made the All-Star team and even cracked 3rd team All-NBA. He's the first "All-NBA" sidekick we've registered so far, which explains his lofty ranking.
(12) Pau Gasol, 2010 Los Angeles Lakers
The following year, Pau Gasol was arguably even better. He started to control the paint even more, registering 11.3 rebounds and 1.7 blocks per game. Once again, he made the All-Star team and 3rd team All-NBA. Between Gasol, Andrew Bynum, and Lamar Odom off the bench -- this Lakers unit may have had the best frontcourt depth in the 2000s.
(11) Shaquille O'Neal, 2006 Miami Heat
When Shaquille O'Neal first arrived from L.A., he immediately assumed the mantle of the star of the Miami Heat. That first year, he even finished 2nd in MVP voting.
However, by the next year (2005-06), Dwyane Wade had usurped that mantle. Now 33, O'Neal shifted into more of a supporting role. He still had a major impact -- averaging 20.0 points, 9.2 rebounds, and 1.8 blocks -- but became more of a second option as Wade tore up the playoffs. He appeared to slow down as the season wore on -- averaging just 13.7 PPG in the Finals.
Still, O'Neal's accolades this season rank highly -- he was an All-Star and 1st team All-NBA performer. For that reason, we're going to put him above some of the 3rd team All-NBA sidekicks. Still, you can argue against that as O'Neal was more on the level of a Pau Gasol than a true superstar at this point.
(10) Klay Thompson, 2015 Golden State Warriors
When we think about "sidekicks," you immediately think of someone with the skill set of Klay Thompson (then age 24). He took "3 and D" to the extreme -- nailing 43.9% from deep and contributing 1.9 "stocks" on the other end (1.1 steals, 0.8 blocks).
Like Pau Gasol, Klay Thompson made the All-Star team and 3rd team All-NBA that season. In fact, he even made an appearance on an MVP ballot and finished 10th overall in the voting. For a clear "sidekick," that's an impressive feat.
(9) Paul Pierce, 2008 Boston Celtics
Back in 2007-08, Danny Ainge wasn't cobbling together a team of a star + supporting sidekicks -- he was combining three stars who had gotten used to being "the man" in their previous stops. New arrival Kevin Garnett assumed the role as the alpha dog -- averaging 18.8 PPG, playing excellent defense, and finishing third in MVP voting.
Meanwhile, Paul Pierce and Ray Allen played the role of overqualified "Robins." Pierce averaged 19.6 points to lead the team, shooting 39.2% from three. Like our previous second bananas, he made the All-Star team and the 3rd team All-NBA. You also got the sense there was more in the tank when need be, as illustrated by his averaging 21.8 points and 6.3 assists in the Finals en route to Finals MVP.
(8) Dwyane Wade, 2013 Miami Heat
As we jump back and forth through time like a Chris Nolan movie, it may be hard to keep track of the ups and downs of these superstars. For this spot, we're talking about the Dwyane Wade of the "Heatles" days. In 2013, Wade was 31 years old, maybe a step past his prime, and a clear second banana to LeBron James.
Still, even in that role, Wade had a massive impact. In the regular season, he averaged 21.2 points, 5.1 assists, 1.9 steals, and 0.8 blocks. While he may have to take a backseat to LeBron James offensively, he utilized his athleticism to be a wrecking ball on the defensive end. Overall, he finished as an All-Star, 3rd team All-NBA, and even landed in 10th place in MVP voting.
(7) Kobe Bryant, 2000 Los Angeles Lakers
Again, let's pay attention to the timeline here. In the first three-peat of the Shaq and Kobe days, Kobe Bryant was only 21 years old and not at the peak of his powers. Make no mistake -- this was the Shaq Show early on. In the Finals, O'Neal averaged 38.0 points and 16.7 rebounds (more boards than Bryant had points with 15.6 PPG).
Despite that, Bryant was clearly a star player in his own right. He averaged numbers similar to 2013 Wade -- 22.5 points and 1.6 steals per game. He made the All-Star game, 1st team All-Defense, and 2nd team All-NBA, accolades that put him in this lofty spot on our rankings.
(6) Dwyane Wade, 2012 Miami Heat
We're toggling back to Dwyane Wade now -- in the year prior to our 8th place spot. In the Heatles' first title (and Wade's second overall), he was still 30 years old and arguably still in his prime. He averaged 22.1 points, 4.6 assists, and even better defensive numbers -- 1.7 steals and 1.3 blocks per game.
For his efforts, he was named to the All-Star team and to the 3rd team All-NBA. He also cracked the MVP voting again, finishing in 10th place once more. We're going to give him a slight edge on Kobe's first title year, but the two would be razor tight; they were both clearly top 10 players in the league at the time.
(5) Anthony Davis, 2020 Los Angeles Lakers
Say what you want about the COVID year, the bubble, and the "Mickey Mouse" championship, but Anthony Davis was a friggin' beast back in 2019-20. He averaged 26.1 points per game, keyed by his ability to get to the line and convert (84.6% shooting on 8.5 FTA per game). He caught fire in the playoffs, averaging a team-high 27.7 PPG with a 66.5% true shooting percentage.
Davis's defensive impact is what sets him apart from most other second bananas. He averaged 1.5 steals and 2.3 blocks per game, earning 1st team All-Defense and nearly winning DPOY. Overall, he made the All-Star team, 1st team All-NBA, and finished 6th in MVP voting. In terms of season accolades, that would be the best on our list so far.
(4) Kobe Bryant, 2001 Los Angeles Lakers
If 1999-2000 Kobe Bryant was still developing, he looked like a finished product by 2000-01. Now age 22, he was a dominant player on both ends. He averaged 28.5 points, 5.9 rebounds, and 5.0 assists, and made 2nd team All-Defense. Overall, this version of Bryant finished 2nd team All-NBA and finished 9th in MVP voting. That ranking would have probably been even higher had he not missed some time in the regular season (only 68 games played).
Looking back, you could see where some of the tension between Kobe Bryant and Shaquille O'Neal may have stemmed from. After all, it's not easy for a kid who put up 29-7-6 in the playoffs to accept being second banana forever.
(3) Kobe Bryant, 2002 Los Angeles Lakers
In the final year of the Lakers' three-peat, the 23-year-old Kobe Bryant had not only established himself as a superstar, but as one of the best players in the entire league. The numbers don't jump off the page -- 25.2 points, 5.5 rebounds, 5.5 assists -- but we have to adjust for the era and the role he played.
The league clearly knew his value. He made the All-Star team, 2nd team All-Defense, 1st team All-NBA, and finished 5th in MVP voting (two spots behind Shaquille O'Neal). He'd jump even higher the next year, overtaking O'Neal as the leading scorer (30.0 PPG) and the leading MVP candidate (3rd overall).
(2) Steph Curry, 2018 Golden State Warriors
Finally, we answered the question that had stumped basketball analysts for years: what would happen if you added a superstar to a team that won 73 games the year prior? Turns out, they'd be pretty good.
For our exercise, the bigger challenge is determining who the "second banana" would be between two recent MVPs Steph Curry and Kevin Durant. I'm going to split the difference and say it was Curry's team the first year (when KD coincidentally missed 20 games) and then got handed over to Durant the following year (when Curry missed 30 games).
Through that lens, we're going to study Curry in that second season. Still only 29, Curry was still squarely in his prime. He averaged 26.4 PPG on a sparkling 67.5% true shooting percentage. Even though he missed 31 regular season games, he still finished 3rd team All-NBA and 10th in MVP voting. You could even argue that he was the most impactful player in the NBA at the time. After all, he had won back-to-back MVPs a few seasons prior.
(1) Kevin Durant, 2017 Golden State Warriors
If we're calling Kevin Durant the "second banana" for the first year in Golden State, he'd rank as the best two-way sidekick in the 2000s. Remember, we're not debating "Kobe vs. Durant" in terms of career achievement here; we're ranking their single-season efforts in a supporting role. Unlike some of our other stars (like a young Kobe), Durant was squarely in his prime at age 28.
In the regular season, he averaged 25.1 PPG on stone-cold efficiency (65% true shooting). Also, outside of Oklahoma City's super-sized lineup, he showcased his ability to protect the rim as well -- blocking 1.6 shots per game. Despite missing 20 games in the regular season, he still finished 2nd team All-NBA.
More than that, Durant demonstrated his true upside in the playoffs and Finals. Matched up with LeBron James and a historically-underrated Cavs team, Durant averaged 35.2 points, 8.2 rebounds, 5.4 assists, 1.0 steals, and 1.6 blocks on godly shooting splits of 56-47-93 (a 69.8% true shooting). Durant was arguably the best player in the NBA that year -- and would be top 3 at minimum. For that reason, he ranks at our top spot.
follow up: where would Jamal Murray or Bam Adebayo rank?
This year's Finals may not be Adam Silver's dream, but it's a great one for this exercise. We rarely see a clearer "second banana" in the hierarchy like Jamal Murray for Denver or Bam Adebayo for Miami.
Ranking them among the second bananas would be a more difficult task. Coming back from injury, Murray didn't have a great regular season. He's still never made the All-Star team. Still, his ability to raise his game in the playoffs and make tough shots does feel reminiscent of young Kyrie Irving during that Cavs title run.
Alternatively, Adebayo has a great case as a two-way stud. He's not the type of "back you down" big that some people want him to be, but he can still score in the mid-range, he's an underrated passer, and he's obviously an exceptional and switchable defender. He made the All-Star team and second team All-Defense this year. Among our second bananas, he reminds me most of Pau Gasol during the Lakers run.
submitted by ZandrickEllison to nba
2023.06.01 13:17 NevermoreSEA Notable Prospect Performances - May 31, 2023
Top 30 Prospect Performances
Low-A Modesto
Prospect | Performance | Position | Age | Ranking |
---|---|---|---|---|
Cole Young | 0-3, RBI | Shortstop | 19 | Mariners #3 |
Gabriel Gonzalez | 1-4, R | Outfield | 19 | Mariners #7 |
Josh Hood | 2-4 | Second Base | 22 | Mariners #29 |
High-A Everett
Prospect | Performance | Position | Age | Ranking |
---|---|---|---|---|
Harry Ford | 1-3, 2BB | Catcher | 20 | Mariners #1 |
Tyler Locklear | 0-4, BB | First Base | 22 | Mariners #10 |
Axel Sanchez | 1-4, BB | Shortstop | 20 | Mariners #15 |
Alberto Rodriguez | 2-5, 2B, R | Outfield | 23 | Mariners #27 |
AA Arkansas
Prospect | Performance | Position | Age | Ranking |
---|---|---|---|---|
Emerson Hancock | 4.0IP, 7H, 4BB, 7ER, 4K | Pitcher | 24 | Mariners #4 |
Jonatan Clase | 0-5 | Outfield | 21 | Mariners #12 |
Robert Perez Jr | 0-4 | Outfield | 22 | Mariners #21 |
AAA Tacoma
Prospect | Performance | Position | Age | Ranking |
---|---|---|---|---|
Cade Marlowe | 0-5 | Outfield | 25 | Mariners #16 |
Zach DeLoach | 2-5, R | Outfield | 24 | Mariners #26 |
Unranked Excellence
Prospect | Performance | Level | Age | Position |
---|---|---|---|---|
Jordan Jackson | 6.0IP, 5H, 0BB, 1ER, 2K | High-A | 24 | Pitcher |
Logan Rinehart | 2.0IP, 0H, 0BB, 0ER, 4K | High-A | 25 | Pitcher |
Logan Warmoth | 3-4, HR, R, RBI | AA | 27 | Shortstop |
Pat Valaika | 2-3, HR, 2R, 3RBI | AAA | 30 | Second Base |
Final Scores
Stockton defeats Modesto 4-2
Everett defeats Hillsboro 3-2
Midland defeats Arkansas 9-2
Tacoma defeats Sacramento 7-6
Highlights
Emerson Hancock strikes out one.
Standings
Affiliate | Record | Standings | Diff | Level |
---|---|---|---|---|
Modesto Nuts | 23-24 | 3rd in division | -6 | Low-A |
Everett AquaSox | 24-23 | 4th in division | +14 | High-A |
Arkansas Travelers | 29-18 | 2nd in division | +41 | AA |
Tacoma Rainiers | 26-27 | 2nd in division | +8 | AAA |
Prospect Performances Index.
submitted by NevermoreSEA to Mariners
2023.06.01 12:00 AutoModerator Daily r/LawnCare No Stupid Questions Thread
Please use this thread to ask any lawn care questions that you may have. There are no stupid questions. This includes weed, fungus, insect, and grass identification. For help on asking a question, please refer to the "How to Get the Most out of Your Post" section at the top of the sidebar.
Check out the sidebar if you're interested in more information on plant hardiness zones, identifying problems, weed control, fertilizer, establishing grass, and organic methods. Also, you may contact your local Cooperative Extension Service for local info.
How to Get the Most out of Your Post: Include a photo of the problem. You can upload to imgur.com for free and it's easy to do. One photo should contain enough information for people to understand the immediate area around the problem (dense shade, extremely sloped, etc.). Other photos should include close-ups of the grass or weed in question: such as this, this, or this. The more photos or context to the situation will help us identify the problem and propose some solutions.
Useful Links: Guides & Calculators: Measure Your Lawn • Make a Property Map • Herbicide Application Calculators • Fertilizing Lawns • Grow From Seed • Grow From Sod • Organic Lawn Care • Other Lawn Calculators
Lawn Pest Control: Weeds & What To Use • Common Weeds • What's Wrong Here? • How To Spray Weeds • MSU Weed ID Tool • Is This a Weed? • Herbicide Types • ID Turf Diseases • Fungi & Control Options • Insects & Control Options
Fertilizing: Fertilizing Lawns • How To Spread Granular Fertilizer • Natural Lawn Care • Fertilizer Calculator
US Cooperative Extension Services: Arkansas - University of Arkansas • California - UC Davis • Florida - University of Florida • Indiana - Purdue University • Nebraska - University of Nebraska-Lincoln • New Hampshire - The University of New Hampshire • New Jersey - Rutgers University • New York - Cornell University • Ohio - The Ohio State University • Oregon - Oregon State University • Texas - Texas A&M • Vermont - The University of Vermont
Canadian Cooperative Extension Services: Ontario - University of Guelph
Recurring Threads: Daily No Stupid Questions Thread • Mowsday Monday • Treatment Tuesday • Weed ID Wednesday • That Didn't Go Well Thursday • Finally Friday: Weekend Lawn Plans • Soil Saturday • Lawn of the Month • Monthly Mower Megathread • Monthly Professionals Podium • Tri-Annual Thatch Thread • Quarterly Seed & Sod Megathread
submitted by AutoModerator to lawncare
2023.06.01 12:00 BM2018Bot Daily Discussion Thread: June 1, 2023
We are looking for new mods!
As our community grows, we need additional moderators to help us with day-to-day comment moderation, and with helping this community become even more effective as a resource to help people win elections. We'd love someone with social media experience to help us expand our reach there, but we are also in need of general content moderators.
If you're interested in applying to be a moderator, you can do so here. Please let us know in this thread or via modmail if you have any questions!
Check out our weekly volunteer posts and our volunteer from home spreadsheet, and help take back America at every level!
And don't forget to sign up for some exciting projects we're working on:
Introducing Campaign Central: a VoteDem VAN alternative project to help local campaigns organize!
Running for office is a major undertaking, and like any great journey, the first step is often the hardest. Our goal at VoteDem is to lower that barrier by developing a campaigning tool we can freely offer as a service to Democrats across the country. Campaign Central is a web-based platform that can load voter registration data, organize phone banking, text banking, canvassing, and much more! But to run this project, we need your help. We need volunteers to collect and upload voter registration data once per month (instructions provided).
Sign up for a state
Interested in adopting a state? Send us a modmail.
Let’s make sure that the GOP knows the true power of grassroots action!
submitted by BM2018Bot to VoteDEM
2023.06.01 09:48 SquibblesMcGoo [Eurovision] The Dark Horse, the Powerhouse and the Great Nordic War of 2023 (Or When the Winner of a Song Competition Made the Audience Revolt)
Ah, Eurovision season. The time of hype, music, unity – and a shit ton of drama. This year’s winner is maybe one of the most controversial we’ve ever had, and that’s saying something considering we've had broadcasters straight up end a broadcast because they didn't like the winner (don't ask).
But I’m getting ahead of myself. Let’s start from the basics:
What is Eurovision?
Eurovision (or ESC) is an annual song contest originating in Europe, organized by the European Broadcasting Union (EBU). It has been held without fail since 1956, aside from 2020, when it was cancelled for truly mysterious reasons (COVID, guys, it was COVID). Originally incorporating only European countries, the contest has grown in scope since its early days, nowadays having almost 40 participating countries (including some decisively non-European countries like Israel and Australia) and reaching a viewership of 150+ million, making it measure up to live events the likes of the Super Bowl.
The concept of the competition is simple: each country sends one original song to compete. Aside from the biggest sponsors of the contest (UK, Spain, Italy, Germany and France) and the winner of the previous year (Ukraine for ESC 2023), each entry participates in one of two semi-finals, from which a combined 20 countries advance to the final via voting. The winner of the competition is the act that gets the most votes in the final, who then earns the right to host next year's contest and enjoy the tourism money. This year, the UK took over the hosting duties from Ukraine for reasons (the war, guys, it was the war).
The voting system is important to understand for context: each country gives two sets of points, both equal in value and weight. Points are given to the top 10 entries: 10th place gets 1 point, 9th gets 2 points, and so on up to 4th place with 7 points; third place gets 8 points, second place gets 10 points and first place gets 12 points, to make the top two positions more valuable. The two sets of votes come from professional juries and the televote. Countries can't vote for their own entry, naturally.
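For the sake of illustration, the rank-to-points rule looks like this (a throwaway sketch for clarity, not anything official from the EBU):

```csharp
// Maps an entry's rank within one voting set (jury or televote) to points.
// Ranks outside the top ten score nothing; 1st/2nd jump to 12/10 points.
static int PointsForRank(int rank) => rank switch
{
    1 => 12,
    2 => 10,
    _ when rank >= 3 && rank <= 10 => 11 - rank, // 3rd..10th get 8..1 points
    _ => 0,
};
```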
Juries are 4-5 member teams consisting of music professionals (artists, producers, managers, vocal coaches, music reporters, radio DJs, choreographers etc.) who appraise each entry based on the following criteria:
- Composition and originality of the song
- Performance on stage
- Vocal capacity of artist(s)
- Overall impression of the act
Televotes are collected by having viewers vote via the official Eurovision app, or by calling/texting. A person/device can give a maximum of 20 votes and each vote costs money, the amount depending on each individual country, but it usually hovers somewhere around 1€/vote. Yes, I blew 20€ on the grand final. Yes, I blew another 20€ on the semifinal I was allowed to vote in (there are two semifinals and you can only vote in the semi your country’s in).
This year's Grand Final was held on May 13, but things start happening way before that. Each Eurovision season typically starts with countries selecting their representatives. Some use internal selection (as in, broadcasters decide who goes all by themselves) but most host national finals, competitions where the winner is granted the golden ticket to Eurovision. These national finals are keenly followed by eurofans (passionate fans of Eurovision).
Ready Player One: UMK 2023 and the Launch of the Dark Horse
In January 2023, UMK, the Finnish national final for Eurovision, started revealing its finalists. Seven finalists were announced, and their songs were released one by one on a once-a-day schedule. Finland's journey in Eurovision has historically been poor: it has only managed to secure one win (granted, there are many who have never won) and often finishes on the back end of the results.
Since 2020, however, UMK came under new management and did what was pretty much a 180: in a few years, it became one of the highest quality national finals around, and because of that, many eyes were on Finland when UMK started. The overall quality of the songs in 2023 was very good, but one emerged as a clear frontrunner.
Käärijä, a Finnish rapper who was virtually unknown even in his home country, entered the competition, as the kids say, guns ablaze and mad as hell. His song Cha Cha Cha, a rap/metal/techno fusion song that does a complete tonal and genre shift halfway through, immediately became the fan favourite to win. When the time came, Käärijä absolutely landslided the national final, getting more points than the three runners-up put together.
Hopeful buzz started amongst the eurofans; would this finally be Finland's time after seventeen years (Finland's last and only win was in 2006)? Finland is by no means the most beloved country in Eurovision, but many see it as an underdog that's finally catching up to speed. Many wanted it to do well. Some were cautiously optimistic.
That was, until Sweden entered the competition, as the kids say, guns ablaze and mad as hell.
Ready Player Two: Melodifestivalen 2023 and the Awakening of the Sleeping Giant
Sweden, by all possible metrics, is one of, if not THE most successful country in Eurovision history. Before 2023, they had raked in a massive six wins (second only to Ireland, who has seven), two of those during the last 11 years alone and the last one as recently as 2015. Additionally, in years they don't win, they place in the top 10 almost without fail. They have only failed to qualify from the semifinals once, and it was largely seen as a national disgrace.
Sweden takes Eurovision VERY seriously, and it shows in their results. Melodifestivalen, Sweden’s national selection, started gathering curious eyes even before it started, because rumours were murmuring of someone very remarkable returning on stage. These rumours turned out to be true.
It's hard to overstate how iconic Loreen is to the Eurovision community. She won the competition back in 2012, with a song that's widely regarded as the best winning song of all time. She's beloved, and for a good reason. Known as a passionate, skilful vocalist and a world-class performer, the moment her participation was confirmed, many considered Melodifestivalen 2023 a done deal.
It must be mentioned that Loreen did attempt to return to Eurovision once between her win in 2012 and entry in 2023, but failed to win Melodifestivalen. However, this year's entry was not here to play. She entered with Tattoo, a pop epic crafted by some of the best songwriters Sweden has to offer, with staging so impeccable it could pass for a music video.
Critics and audience alike were raving. She was back, more powerful than ever. Expectedly, she won Melodifestivalen and earned her place in the line-up of 2023. In the community, the buzz was immediate, but not all of it was positive.
Sweden and Eurovision: A Turbulent Relationship
I think it's fair to say that Sweden is, for lack of a better term, suffering from success. Lately, there has been a somewhat anti-Sweden mentality brewing in the community, stemming from a few key criticisms Sweden regularly gets:
- Genre loyalty: Swedish entries generally all fall under the umbrella of “radio friendly pop”. They’re well composed, well produced but seemingly leave the fandom cold. “Generic”, “soulless” and “safe” are terms often thrown at Swedish entries
- Jury bias: For a while now, Sweden has done better with juries than the televote, the difference once notoriously being as massive as 220 points, or 2nd place (jury) vs 22nd (out of a possible 26) (by the public). That being said, it’s disingenuous to say the televote hates Sweden as they regularly rank in the top 10, but it’s hard to deny that their point tally routinely consists of more jury than televote points
- Same songwriters: Melodifestivalen has been quite frequently criticized for having a large chunk of its songs written by the same core group of ten-ish people. In Sweden’s defence, a country of 10 million does not have that many active songwriters, but it’s hard to deny it’s a striking detail. For instance, in this year’s final, Melodifestivalen didn’t have a single entry that didn’t have at least two of these songwriters credited
This has led to things souring between Sweden and eurofans. To sum it up concisely: many eurofans feel like Sweden never takes risks, sends ungenuine lab-crafted jury baits and is always rewarded for it no matter what the viewers do because the juries always have Sweden's back. There's a lot of intricacies that go into this and there's nuance to this criticism, but for the sake of keeping things concise, I won't go into them now, all you need to know is that this is something that's going on.
“I love Loreen, but…”
Because of this sentiment, while Loreen undoubtedly had her fans, a sizeable section of the fandom started being critical of her. People started negging. Her song was called generic and soulless, and the fact it was written by a huge group of the “regulars” in Melodifestivalen was brought up. People said it was too similar to her 2012 winning song, a 2.0 or carbon copy if you will. Some people also thought that coming back after already winning was unfair, since she had a degree of Eurovision fame that could affect the results.
As soon as Loreen was announced as the Swedish representative, the competition took on a narrative of its own. It was widely seen as a race between Finland and Sweden. While Loreen definitely had her fans, the overall mentality was leaning more towards Käärijä. He was seen as the underdog from the country that has a winning chance once every 20 years, if that, going up against the Eurovision powerhouse Sweden who wins so often the fandom is getting tired of it.
That’s not to say no other entries were ever in the talks:
Spain’s artsy fusion flamenco song was seen as a potential jury darling.
France’s sassy chanson was seen as a potential sleeper hit.
Norway’s TikTok viral Viking techno banger was seen as a potential televote magnet.
Ukraine was still a big unknown given that the previous year, they had received the largest televote tally in the history of the competition and many thought sympathy votes would keep pouring in this year as well. And then there’s whatever the fuck
Croatia was doing (okay, they never had a chance of winning, I just wanted any excuse to subject people to this chaos).
But the overall sentiment was heavily leaning towards this being a neighbour war between Finland and Sweden. As the press and pre-parties (fan arranged concerts where artists are invited to perform to get their first interactions with the fandom) started, eyes were undeniably on Loreen and Käärijä.
During his Eurovision journey, Käärijä became somewhat of a crowd darling and went moderately viral on TikTok. A little guy with a bowl cut and a thick accent who had quickly gotten the reputation of being both funny and extremely friendly, coming to the competition with an out of the box and blatantly flamboyant genre fusion banger, walking around in a green bolero with no shirt. It's hard not to feel endeared. (Not that Loreen was unfriendly or anything, she’s perfectly nice by all accounts, but her off-stage personality wasn’t as much of a focal point as it was for Käärijä who became so beloved he was locked in as an icon even before the competition began).
Finns, they, well… Rallied behind Käärijä like crazy. Their government officials sent tweets wishing him good luck. The state owned railway company
dressed its statues as Käärijä. The Helsinki tram
got a Käärijä makeover. Cha Cha Cha topped the Finnish charts for ages (and still does AFAIK). The Finnish press was going gaga, broadcasting how only Loreen stood in the way of Käärijä’s victory.
“Just Ignore Everyone”: The Main Event That Undeniably Shaved a Few Years Off Of Graham Norton’s Life Span
The main event took place at the Liverpool Arena. As expected, both Sweden and Finland qualified for the final (later revealed to have come second and first, respectively). When the grand final arrived, what was supposed to be a fun event (ironically carrying the slogan “United by Music”) turned into a rather tense occasion. Sweden performed 9th whereas Finland performed 13th. Both of their performances went largely well.
During
Finland’s performance, the crowd went so crazy some commentators even said the whole building was shaking. People shouted Cha Cha Cha at the top of their lungs. The audience was on his side. Not that
Loreen’s performance was poorly received either; she clearly had a lot of friends at the arena, but Finland got the audience
by the balls.
After all 26 acts were done performing, the time for vote announcements came. The structure goes as follows: first, each country gives its jury points one by one, its spokesperson announcing out loud which country got the 12 points, the highest score possible. After that, the combined televote points from all countries are given to each act one by one, starting from the country currently in last place.
Very soon, it became obvious that the juries had taken an immense liking to Tattoo.
Loreen got 12 points after 12 points, and the atmosphere at the arena shifted. The audience grew more and more agitated with each 12 points Sweden received, and cheered very loudly whenever Käärijä (who was expected to do significantly worse with the juries thanks to his non-mainstream genre and his lesser singing ability, being a rapper first and foremost) got any points. It got to the point where they responded to Sweden getting 12 points by chanting Cha Cha Cha.
The hosts (Graham Norton and Hannah Waddingham) were getting visibly uncomfortable and
had to calm the crowd more than once. Hannah Waddingham eventually gave the exasperated yet iconic one-liner “just ignore everyone” when the chanting wouldn’t calm down. In the end, Sweden was comfortably in 1st place with the juries, having raked in a massive and historic 340 points, almost double that of the runner-up Israel (who got 177 jury points). Finland ranked 4th with the juries with a total tally of 150, nearly 200 points behind Loreen.
Once the time for televotes came, everyone’s eyes were on Finland. Käärijä was expected to do well, but no one could quite gauge how well he’d do. Turns out, very well. He raked in a
massive 376 televote points, getting full marks from 18 of 37 countries and not placing lower than 5th in any country. To put it in perspective, this is the 2nd highest televote score ever (by percentage of available points), the highest being Ukraine’s from the year prior, and those circumstances were quite unprecedented.
By then, it was obvious the two-horse race had come true. Loreen would need 189 points (roughly a 3rd-4th place finish in the televote) to secure her win, a tally that wasn’t a walk in the park, but was very doable with her popularity.
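For anyone who wants to follow the arithmetic, here is a tiny sanity check of that threshold, using only the totals quoted in this write-up (it ignores tie-break rules, which is presumably where the small cushion in the 189 figure comes from):

```python
# Totals known at this point of the reveal
kaarija_total = 150 + 376   # jury + televote = 526
loreen_jury = 340

# Minimum televote score Loreen needs to pass him outright
needed = kaarija_total - loreen_jury + 1
print(needed)  # 187, in line with the ~189 quoted above
```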
The following sequence is still very bizarre to me. Loreen’s points were announced.
She got 243 points, making her the televote runner-up. That in turn meant Käärijä had lost to her by 57 points (583 to 526) despite outdoing her televote score by 133 points. As the winner was announced, Käärijä buried his head in his hands, clearly devastated. Loreen was immediately guided back on stage for her winner’s reprise.
Footage from backstage shows many contestants beelining for Käärijä to comfort him. They’re seen hugging him,
chanting Cha Cha Cha like he’s the actual winner and trying to cheer him up. All the while, Käärijä himself was obviously heartbroken. The crowd wasn’t happy, to a point where when Loreen accepted the trophy,
she asked if anyone even wanted her to perform again.
While Loreen’s fans were ecstatic to see her win and perform again, a portion of the audience reportedly walked out, disappointed. That was the end of the main competition. Sweden had won its 7th Eurovision trophy, catching up to Ireland for most wins ever. Loreen had become the second person (and first woman) in history to win twice.
The fandom, while disappointed, quickly got over themselves and accepted the outcome- yeah no one’s buying this lmfao. The dust was up in the air and wouldn’t settle for a good while.
Let the Shit Slinging Begin: Conspiracy Theories, Petitions and the Media Fight
The outcome received immediate backlash. Loreen’s winning performance and grand final performance were mass downvoted on YouTube. Loreen’s
victory post on
Eurovision currently has 0 upvotes and over 6500 comments.
Social media posts by Eurovision about Loreen were spammed by people proclaiming Käärijä was the real winner. Some contestants (namely Slovenia, Estonia and Serbia) outright said Käärijä was their winner. Finland’s grand final performance views also surpassed that of Sweden’s.
There was a lot of shit slinging. Conspiracies started rearing their heads. Some were
convinced Sweden had rigged the jury in order to host Eurovision on the 50th anniversary of ABBA’s victory (yes, ABBA is Swedish; yes, they won Eurovision with Waterloo; no, there’s no proof of this conspiracy). A
petition was started to remove the juries and it reached 60 000 signatures in two days. Loreen was
accused of plagiarizing at least two different songs (not that I personally think the accusations have any merit; the melody line is just incredibly common). The Norwegian delegation
outright said the juries should be overhauled (Norway got screwed over massively by the juries, being placed 17th, only to be pulled up to 5th overall by the televote).
When detailed televote results came out, it turned out Sweden had not placed 1st in a single country. It also had fewer 2nd places than Finland, and its average position was 5th (which, coincidentally, was the lowest placement Käärijä got in any country). People were pissed. Some proclaimed spending money on voting is a waste of time if the 2nd highest televote score in history isn’t enough to win because a group of 200 or so people said so.
People started going through the jury credentials, soon discovering that they were overwhelmingly pop professionals (
55% to be exact) while rock pros were nowhere to be seen (3.8% of the jury, to be exact). To be fair, people weren’t only pissed for Finland; they were pissed for other entries that seemingly ticked all the boxes for the juries just to get a minimal result because Sweden vacuumed up all the points like it was time for spring cleaning. (I feel like I must mention that a lot of televote-magnet entries also flopped hard because Finland sucked up most of the televote points, leaving the rest to fight for scraps.)
With the televote results also came a peculiar detail that kicked the drama between Sweden and Finland to a whole new sphere. Turns out, every country gave Sweden televote points, except one. Yep, you guessed it. Finland blanked Sweden, while Sweden’s televote gave Finland the full 12 points. (Finnish and Swedish juries gave each other 12 points.)
This was seen as unsportsmanlike and the Swedish media latched onto it. Think pieces started coming out. One infamous
Swedish Eurovision podcast episode hosted by a Swedish newspaper consisted mostly of ranting about how Finland is a "country of idiots", how it's impossible Finns could genuinely have thought 10 other songs were better than Tattoo and how it was a testament to their lack of taste that they voted for
Germany and not Sweden (Germany came in last; Finland was one of the only countries to give them points. Germany sent a metal entry, so I’m not sure why this was a surprise - Finns LOVE metal).
Swedish newspapers also
widely reported that the Finnish Eurovision commentator had told Finns not to vote for Sweden, further adding fuel to the fire. This seems to mostly be a case of lost in translation/cultural miscommunication: the commentator in question read a joke out loud from the stream chat that essentially said “you’re allowed to vote tactically but not for your own country”, referencing the general elections held in Finland just months prior, where a lot of people voted tactically for the largest left-wing party to prevent the largest right-wing party from taking over. It didn’t work, but “vote tactically” became a nationwide meme. Said commentator also simultaneously came under fire from Finns for stanning Loreen too much during his commentary. Man just can’t win lmao
One Swedish newspaper article evoked strong backlash in Finland by
referring to Finland as “östra rikshalvan” (“Eastern part of the Kingdom”, roughly translated), which was the term used for Finland when it still belonged to Sweden. Many Finns saw it as colonialist, as though Sweden were implying its former vassal was obligated to spend money giving it points. However, it’s difficult to deny this lack of points was likely tactical on Finland’s part, given that they have given Sweden points every other year except this one. The Finnish media also broadcast heavily that Loreen’s win depended on how many televotes she got compared to Käärijä, so it’s not far-fetched at all that Finns were aware of it and voted for something else.
Finnish press wasn’t silent either. A
widely publicized clip from a gossip radio show hosted by the teen-targeted state-owned radio station Yle X3M heavily criticized Loreen’s entry, calling it “shit” and making a tasteless joke implying Loreen was on drugs the whole night, thanks to her somewhat ethereal demeanour. One of the hosts also seemed convinced the results were rigged. Newspapers also eagerly reported on the plagiarism allegations against Tattoo, even if they never went as far as suggesting there was any merit to them.
Perhaps the saddest part of this is the contestants themselves. Loreen and Käärijä both have consistently praised each other. They reportedly get along great and
there are numerous clips of them hugging, laughing and joking around. Despite taking the loss heavily, Käärijä congratulated Loreen and emphasized he loves her and wishes her all the best from
the very first interview he gave after his loss. (He did however say he feels like the jury system might need a reform.) Likewise, Loreen
said in an interview that she wasn’t bothered by the crowd chanting Cha Cha Cha because she thinks Käärijä is awesome and authentic.
They’re still in contact and are planning to meet up for coffee when Loreen’s next in Helsinki. The abuse Loreen herself received reached downright disgusting proportions, crossing from general trashing into misogynistic and even racist territory (because of her Moroccan heritage). It got to the point where Käärijä
had to address it on Finnish morning TV, emphasizing that the results are not her fault and that he feels horrible for her when people insult her because he knows her and knows she’s a lovely person. By all accounts, there’s no bad blood between them (or any contestants for that matter, this year was remarkably cordial).
So, where are we now? People have mostly calmed down (mostly) and accepted the results. Many still push for a jury reform, demanding larger juries with more diversity and knowledge of non-mainstream genres, a shift to a 60/40 voting split in favour of the televote, and many other things too numerous to list here. The EBU has not addressed the controversy in any shape or form (and likely won’t), and we’ll probably have to wait until next year to find out if the jury system will be overhauled. Loreen and Käärijä fans are still bickering amongst themselves, but the general public seems to have moved on. Loreen is currently enjoying very good streaming numbers and chart placements across the world, and a
record number of Eurovision entries are charting. Käärijä isn't doing half-bad either, being
greeted by an airport full of supportive Finns upon his return and having skyrocketed to undeniable legendary status in the Finnish music scene.
Here’s to hoping Käärijä’s invited to perform at Eurovision 2024 as an interval act and regardless of jury reform (or lack thereof) people can bury this hatchet and Nordic unity can blossom once again. (Nordics get along great... Until one loses a competition to another, then it means war.)
submitted by
SquibblesMcGoo to
HobbyDrama [link] [comments]
2023.06.01 09:38 BruteSentiment Daily Minors Quick-Notes 5/31/23 - Whisenhunt is Back in Peak Form
Carson Whisenhunt went back on a tear after his weakest game of the year, and is looking as strong as ever as one of the Giants’ top pitching prospects. Meanwhile, there were quite a few debuts (and one second game) across the system that left an impact on their games.
AAA: Tacoma 7, Sacramento 6 (10 Innings) Link
Sacramento Notes: - Sacramento blew a couple of leads to lose this game. Sacramento had a thin 5-4 lead going into the 9th, but an error set up a 2-out infield single that let Tacoma tie the game. That sent the game to the 10th, where a Luis Matos single gave the River Cats a 6-5 lead in the top of the inning. But in the bottom half, with the ghost runner on, Melvin Adon hit a batter with a pitch, a steal put runners on 2nd and 3rd, and a 2-run walkoff single with two out ended the game.
- David Villar led the River Cats, hitting their only extra-base hit by going 3-for-6 with two strikeouts and a home run to give him three RBI. After nine games, Villar is batting .294 with a double and three home runs.
https://twitter.com/RiverCats/status/1664085854267256832?s=20 https://twitter.com/RiverCats/status/1664094525441163264?s=20 - Luis Matos was 3-for-5 with three singles, a walk, two strikeouts, two stolen bases, and a caught stealing. It’s Matos’ second straight 3-hit game, and fourth in 13 games at Sacramento. Matos has a batting line of .362/.413/.483 in Sacramento, with four steals in five attempts, all coming in the last two games.
https://twitter.com/RiverCats/status/1664111450011447300?s=20 https://twitter.com/RiverCats/status/1664121043852886016?s=20 - Another good game for newcomer Jacob Nottingham, who was 2-for-3 with a walk, a HBP, and a stolen base. In three games with Sacramento, Nottingham has gone 6-for-11 with a double, with a walk to two strikeouts.
- Starter Sean Hjelle had a solid game. He gave up three runs, two earned, in 5.0 innings, with six hits, a walk, and three strikeouts. Hjelle has a 3.00 ERA in Sacramento after six games, but with just 17 strikeouts to five walks in 24.0 innings.
- Cole Waites delivered his seventh straight scoreless game, giving up a hit and a walk with two strikeouts. Waites has lowered his ERA from 9.31 to 5.29 over that span, and has increased his strikeouts, with two in each of the last two games. Over the seven games, he has struck out five to two walks in 7.1 innings.
- Joey Marciano had a tough game, blowing the save and getting two unearned runners on his record, with his ghost runner in the 10th scoring after he left, on two hits and no walks. Marciano has a 5.88 ERA on the season, with 32 strikeouts to 26 walks in 26.0 innings of work.
AA: Erie 10, Richmond 3 (10 Innings) Link
Richmond Notes: - Woof… a tight game turned into an extra-innings blowout. Marco Luciano had a 1st-inning home run to put Richmond up 1-0, and the teams traded leads until it was 3-3 after six innings. It stayed that way into the 10th inning, but Erie scored seven runs, many of them unearned thanks to a one-out error, and there was no hope for Richmond after that.
- It was another strong game for Vaun Brown, who went 3-for-5 with a home run and a double. It was his second of each in Double-A, coming in his eighth game at the level. Brown has a Richmond batting line of .355/.444/.677, with two doubles, a triple, and two home runs.
https://twitter.com/GoSquirrels/status/1664060714133209091?s=20 - Marco Luciano hit his 5th home run, going 2-for-4. It’s his second 2-hit game out of the last three. His batting line is now at .185/.292/.432 after 23 games, slowly inching up after a very slow start.
https://twitter.com/MiLB/status/1664043360519323648?s=20 - Left fielder Carter Aldrete went 2-for-4 with two strikeouts and a double, and picked up a steal. Aldrete had a 9-game hitting streak end on Tuesday, so he got right back on track. He now has nine doubles and six home runs in 43 games.
https://twitter.com/GoSquirrels/status/1664052439031447552?s=20 - Making his Double-A debut was Jimmy Glowenke, but it didn’t go that well. Glowenke was 0-for-4 with two strikeouts, and also made a key 10th inning error while playing second base. Glowenke hit .313/.413/.542, with ten doubles and three home runs.
- Starting pitcher Landen Roupp had another strong, though short, start. In his fifth start of the season, Roupp struck out a season-high seven in 3.0 innings, allowing a run on three hits and a walk. Roupp has a 2.31 ERA with 19 strikeouts to four walks in 11.2 innings, and is holding batters to a .195 batting average.
https://twitter.com/GoSquirrels/status/1664045925117861888?s=20 - Juan Sanchez was the best of the relievers, striking out two in 2.0 scoreless innings, allowing two hits and a walk. Sanchez has a 2.28 ERA with 25 strikeouts to eight walks in 23.2 innings.
High-A: Eugene 8, Vancouver 1 Link
Eugene Notes: - A pitching duel rather suddenly became a blowout for Eugene. Carson Whisenhunt set the tone with 5.0 shutout innings, allowing just one baserunner on a single hit, and struck out seven. Eugene scored a run in the 1st, and it stayed 1-0 until Eugene got a second in the 7th. But in the 8th inning, the team scored six runs to put the game away.
- It was a great bounceback start for Carson Whisenhunt, who responded to his worst game at High-A with his best (at least, with one walk less than the runner up). After six games with Eugene, Whisenhunt has a 1.42 ERA, and has 36 strikeouts to eight walks in 25.1 innings, and is holding batters to just a .107 average.
- José Cruz didn’t allow any baserunners in his 2.0 innings, striking out four. That’s five straight scoreless games for Cruz, with 17 strikeouts to two walks and two hits in 9.0 innings.
- In his second High-A game, Carter Howell led the team offensively, going 4-for-5 with a double and a triple. So far he’s 5-for-9 with the double and triple, and no walks or strikeouts at his new level.
- Victor Bericoto went 2-for-3 with a walk and a sacrifice fly. Bericoto’s batting line sits at .297/.342/.485 through 43 games, with 15 walks and 37 strikeouts. He has eight doubles, a triple, and seven home runs.
- Center fielder Grant McCray went 2-for-4 with a walk and a strikeout. It’s been a good May for McCray, who has a batting line of .295/.395/.514 on the month, with five doubles and six home runs, and 18 walks against 35 strikeouts.
- Aeverson Arteaga was 2-for-5, having been moved down to the number six spot after Howell’s arrival. Arteaga’s May isn’t as strong, but he had a solid .233/.312/.393 line with six doubles, two triples, and four home runs, and 13 walks to 26 strikeouts.
Low-A: San Jose 8, Fresno 5 Link
San Jose Notes: - San Jose simply had too many hits and overwhelmed Fresno. None of San Jose’s 12 hits were home runs, but three were triples and three were doubles en route to this win, which was highlighted by shortstop Anthony Rodriguez having just about a perfect day in his Low-A debut.
- Shortstop Anthony Rodriguez made his Single-A debut after an injury in the spring delayed him, and he went 3-for-3 with a HBP, with two doubles and a triple, although he also had two errors. The 20-year old spent the last two seasons in the Arizona Complex League, combining the two years to have a batting line of .238/.345/.371 with 13 doubles, a triple, and ten home runs.
https://twitter.com/SJGiants/status/1664107087432142849?s=20 - Onil Perez was at DH, going 2-for-4 with a triple. Perez has a batting line of .313/.391/.417 with five doubles, two triples, and one home run, and 14 walks to 12 strikeouts.
- Left fielder Tanner O’Tremba was 1-for-5 with his second triple of the season. O’Tremba now has 11 doubles, two triples, and three home runs in 37 games, for a batting line of .272/.387/.449.
https://twitter.com/SJGiants/status/1664110406930157569?s=20 - Third baseman Andrew Kachel was 2-for-5 on the day. He’s had a great May, where he had a batting line of .348/.421/.576 with six doubles and three home runs.
- Another debut was outfielder Turner Hill, who played center field and went 1-for-4 with a double. The 24-year old Hill was signed out of the Frontier League three weeks ago after going undrafted in 2022. He led the summer MLB Draft League in batting average last year.
- Starting pitcher Hayden Birdsong gave up a season-high four runs in 3.1 innings, on three hits and three walks with six strikeouts. Birdsong had not given up more than two earned runs in a game before this on the year. It bumped his ERA from 1.78 to 2.67, and he now has 59 strikeouts to 20 walks in 33.2 innings.
- Reliever Dylan Cumming got the save, striking out one in a scoreless inning. It’s his fifth save of the season for the 30-17 Giants, and he hasn’t given up any runs since allowing three on May 4th. Since then, over seven appearances, he’s dropped his ERA from 4.02 to 2.36.
submitted by BruteSentiment to SFGiants [link] [comments]
2023.06.01 09:29 WearyString557 Exceeding Expectations: OKR Examples that Deliver
What are OKRs?
OKRs, which stands for Objectives and Key Results, is a goal-setting framework used by organizations to define and track their objectives and measure their progress toward achieving those objectives. It was popularized by Intel and later adopted by many successful companies, including Google. In the OKR framework, objectives are the high-level goals that an organization wants to achieve within a specific time frame. They are qualitative and provide a direction for the organization. Key Results, on the other hand, are specific, measurable outcomes that define how progress toward the objectives will be assessed. Key Results are typically quantifiable and time-bound, providing a clear indication of success.
The concept behind OKRs is to set ambitious and challenging goals that inspire individuals and teams to strive for excellence. By defining measurable key results, progress toward the objectives can be easily tracked, and alignment and transparency can be fostered within the organization. OKRs are considered one of the best goal-setting frameworks because they encourage organizations to set ambitious goals, create alignment and focus, and foster a culture of transparency and accountability. They provide a structured approach for setting goals and measuring progress, promoting continuous improvement and innovation.
To implement OKRs effectively, many organizations use OKR software. OKR software provides a platform for setting, tracking, and managing OKRs across teams and individuals. It allows organizations to create, align, and cascade objectives and key results, monitor progress in real time, and facilitate collaboration and communication around goal achievement.
What Does a Bad OKR Look Like?
A bad OKR can be characterized by various factors that hinder its effectiveness in driving performance and achieving desired outcomes. Here are some characteristics of a bad OKR:
- Vague or unclear objectives: If the objectives are poorly defined, lacking specificity, or not aligned with the overall strategic direction, it becomes difficult for teams to understand what they are working towards and how to measure success.
- Irrelevant or disconnected key results: Key results should be relevant to achieving the goals. Tracking performance and evaluating the effectiveness of efforts becomes difficult if the key results are not in line with the objectives or do not demonstrate progress.
- Unrealistic or easily achievable goals: Objectives that are too ambitious or unrealistic may lead to demotivation and a sense of failure. On the other hand, objectives that are too easily achievable may not challenge teams to strive for excellence and may not drive significant improvement.
- Lack of alignment and transparency: The best way to ensure alignment at all organizational levels with OKRs is to cascade them from top to bottom. Teams may operate in silos or have competing priorities if there is a lack of alignment and transparency in OKRs.
- Set-and-forget mentality: OKRs require regular tracking, review, and adjustment. If there is no ongoing monitoring or feedback loop, teams may lose sight of their goals and fail to adapt to changing circumstances.
- Prioritizing individual performance: OKRs should encourage collaboration and teamwork. If they overly focus on individual performance without considering the collective effort, it can lead to a competitive rather than a collaborative culture.
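To make the objective/key-result split concrete, here is a minimal sketch of how an OKR could be modelled in code. It is purely illustrative (the class and field names are my own, not any particular OKR tool’s API), but it captures the rule above: the objective stays qualitative, while every key result carries a measurable target whose progress can be computed.

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str      # specific, measurable outcome
    target: float         # quantifiable goal, e.g. 20 (% growth)
    current: float = 0.0  # latest measured value

    def progress(self) -> float:
        """Fraction of the target achieved, capped at 1.0."""
        return min(self.current / self.target, 1.0) if self.target else 0.0

@dataclass
class Objective:
    title: str            # qualitative, high-level goal
    timeframe: str        # e.g. "2023-Q4"
    key_results: list[KeyResult] = field(default_factory=list)

    def progress(self) -> float:
        """Objective progress = average progress of its key results."""
        if not self.key_results:
            return 0.0
        return sum(kr.progress() for kr in self.key_results) / len(self.key_results)

# One of the marketing examples from this post, expressed in this structure
okr = Objective(
    title="Increase Brand Awareness",
    timeframe="2023-Q4",
    key_results=[
        KeyResult("Increase social media followers by 20%", target=20, current=12),
        KeyResult("Generate 10,000 new leads via content marketing", target=10_000, current=4_500),
        KeyResult("15% increase in organic search traffic", target=15, current=9),
    ],
)
print(f"{okr.title}: {okr.progress():.0%} complete")  # 55% complete
```

Note how a vague key result ("improve user interaction") simply cannot be expressed here without inventing a target, which is exactly the failure mode the "bad OKR" list above warns about.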
OKR Categories
OKRs can be categorized into different areas or domains based on the focus of the objectives and key results. Common OKR categories include:
- Company-wide OKRs: These are objectives and key results that align with the overall strategic goals and vision of the organization.
- Departmental or Team OKRs: These OKRs are specific to a particular department or team within the organization and contribute to the broader company objectives.
- Individual OKRs: These are goals set by individuals to align their efforts with the team or department objectives.
- Initiative-based OKRs: These OKRs are focused on specific projects or initiatives undertaken by the organization.
- Personal Development OKRs: These OKRs focus on individual skill development, learning, and personal growth.
Categorizing OKRs helps provide structure and clarity in setting goals and enables better alignment across different levels and areas within the organization.
Defining Great Key Results
1. Specific: Key results should be clear and specific, leaving no room for ambiguity or misinterpretation. They should provide a clear target or outcome.
2. Measurable: Key results should be quantifiable or have a way to be objectively measured. They should use numerical values or metrics to track progress and determine success.
3. Achievable: Key results should be challenging yet attainable. They should stretch individuals or teams to push their limits and achieve significant progress but should still be within reach with effort and focus.
4. Relevant: Key results should directly contribute to the overall objective they are tied to. They should align with the desired outcomes and be relevant to the success of the project, team, or organization.
5. Time-bound: Key results should have a specific timeframe or deadline. They should be linked to a specific period, such as a quarter or year, to create a sense of urgency and provide a clear timeline for evaluation.
How to Use Our OKR Examples?
Understanding Objectives and Key Results (OKRs) as a concept will help you use OKR examples more effectively.
- The difference between objectives and key results is that objectives are bold goals that give direction and purpose, while key results are quantifiable results that indicate whether those goals were successfully attained.
- Explore different OKR examples that fit your unique needs and goals after you have a firm grasp of the fundamentals. You can find these examples in books, online sources, or by looking at the OKR frameworks of successful businesses. To fit the particular needs and objectives of your organization, analyze and modify these examples.
- After that, include your team in the OKR-setting process. Define challenging goals with your team to motivate them to pursue excellence. To achieve clarity, make sure that the key results are SMART (specific, measurable, attainable, relevant, and time-bound).
- Track your OKRs’ progress regularly, and use them as the foundation for performance evaluations and feedback sessions (a minimal check-in sketch follows this list). Promote open communication among team members to foster a culture of learning and improvement.
- Keep in mind that OKRs can be modified and improved over time; they are not fixed. To keep your OKRs current and in line with the shifting priorities of your organization, periodically review and update them.
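Picking up the tracking point above, here is a small, self-contained sketch of how a quarterly check-in could turn a key result’s raw progress into a status flag. The thresholds are illustrative assumptions of mine rather than part of any official OKR methodology; the 70% bar for stretch goals echoes the convention, popularized at Google, that landing around 70% of an aspirational OKR counts as success (the committed/aspirational distinction is expanded on near the end of this post).

```python
def check_in(progress: float, aspirational: bool = False) -> str:
    """Map a key result's progress (0.0 to 1.0) to a status flag.

    Committed OKRs are expected to land at 100%; aspirational
    (stretch) OKRs are graded against a softer 70% bar.
    """
    bar = 0.7 if aspirational else 1.0
    if progress >= bar:
        return "on track"
    if progress >= 0.6 * bar:
        return "at risk"
    return "off track"

# The same 75% progress reads very differently under each regime
print(check_in(0.75))                     # "at risk"  (committed bar is 100%)
print(check_in(0.75, aspirational=True))  # "on track" (stretch bar is 70%)
```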
OKR Writing Tips
- Start with a clear objective: State the overall goal or outcome you want to achieve. Example: Increase customer satisfaction ratings.
- Be specific and measurable: Include key results that can be quantified or easily assessed. Example: Achieve a customer satisfaction rating of 90% or higher.
- Focus on impact and value: Emphasize the desired impact or value that the objective will bring. Example: Drive a 20% increase in revenue from new customer acquisitions.
- Use action-oriented language: Use verbs that indicate action and convey what needs to be done. Example: Launch a new marketing campaign targeting millennials.
- Keep it time-bound: Specify a time frame for achieving the objective or key results. Example: Increase website traffic by 30% within the next quarter.
- Align with organizational goals: Ensure your OKRs are in line with the overall company objectives. Example: Support the company’s sustainability initiatives by reducing carbon emissions by 10% by the end of the year.
Top three benefits of OKRs
- Clarity and Focus: OKRs provide a clear direction and focus for individuals, teams, and organizations. By setting specific objectives and measurable key results, everyone knows what they need to achieve and can align their efforts accordingly. This clarity helps prioritize tasks and activities, reducing distractions and enhancing overall productivity.
- Alignment and Collaboration: OKRs promote alignment and collaboration within teams and across the organization. When everyone understands the shared objectives and key results, they can work together towards a common goal. OKRs foster transparency and facilitate communication, enabling teams to coordinate efforts, share resources, and support each other to achieve collective success.
- Performance Tracking and Accountability: OKRs provide a framework for tracking performance and holding individuals and teams accountable. Key results serve as measurable indicators of progress, allowing regular assessment of achievements. By regularly reviewing and updating OKRs, organizations can identify areas of improvement, make necessary adjustments, and ensure continuous growth and development.
OKR Examples
Team OKR Examples
Objective: Enhance Employee Engagement
- KR 1: Increase overall employee satisfaction score by 10% in the annual engagement survey.
- KR 2: Implement a new employee recognition program and achieve a participation rate of 80% within six months.
- KR 3: Reduce voluntary employee turnover rate by 15% through targeted retention initiatives.
Objective: Develop a High-Performing Workforce
- KR 1: Implement a leadership development program and have 80% of managers complete at least one training session within the year.
- KR 2: Improve average employee performance rating by 10% based on quarterly performance evaluations.
- KR 3: Increase the number of employees participating in professional development activities by 20% compared to the previous year.
Objective: Strengthen Diversity and Inclusion
- KR 1: Increase diversity in leadership positions by promoting and hiring diverse candidates to fill 30% of executive-level roles.
- KR 2: Conduct unconscious bias training for all employees and achieve a participation rate of 90% within three months.
- KR 3: Establish employee resource groups (ERGs) representing various underrepresented groups and have active participation of 50%
CEO OKR Example
Objective: Drive Company Growth and Market Expansion
- KR 1: Increase annual revenue by 20% through strategic partnerships and market expansion initiatives.
- KR 2: Achieve a customer retention rate of 90% by implementing customer success programs and enhancing product/service offerings.
- KR 3: Launch two new product lines in the target market to diversify revenue streams and increase market share.
Objective: Foster Innovation and Organisational Agility
- KR 1: Implement an innovation program that generates at least three new product/service ideas per quarter and successfully launches one of them.
- KR 2: Increase cross-functional collaboration by implementing agile methodologies in at least three departments, resulting in a 20% reduction in time-to-market for new initiatives.
- KR 3: Establish a feedback loop with employees through regular pulse surveys, resulting in a 10% increase in employee satisfaction and engagement scores.
Objective: Strengthen Leadership and Talent Development
- KR 1: Implement a leadership development program that provides coaching and training to at least 80% of the management team, resulting in a 30% increase in leadership effectiveness scores.
- KR 2: Enhance talent acquisition strategies to attract top-tier candidates, resulting in a 20% reduction in time-to-hire for critical roles.
- KR 3: Implement a performance management system that includes regular feedback and goal-setting, resulting in a 15% increase in employee performance ratings and a decrease in turnover rate by 10%
To learn more about CEOs’ OKRs, visit our blog.
Marketing OKR examples
Objective: Increase Brand Awareness
- KR 1: Increase social media followers by 20%.
- KR 2: Generate 10,000 new leads through content marketing efforts.
- KR 3: Achieve a 15% increase in website traffic from organic search.
Objective: Improve Conversion Rates
- KR 1: Increase the conversion rate on the website by 10%.
- KR 2: Reduce shopping cart abandonment rate by 15%.
- KR 3: Improve email click-through rates by 20%.
Objective: Launch a New Product/Service Successfully
- KR 1: Generate 500 pre-orders for the new product/service within the first month.
- KR 2: Achieve a customer satisfaction rating of 4 out of 5 for the new product/service.
- KR 3: Generate positive media coverage and secure at least three press mentions.
To learn more about Marketing OKRs, visit our blog.
Social media OKR examples
Objective: Increase user engagement on social media platforms.
- KR 1: Increase the average number of comments per post by 20%.
- KR 2: Increase the average number of likes per post by 15%.
- KR 3: Increase the average number of shares per post by 10%.
- KR 4: Improve user interaction
Objective: Enhance brand visibility and reach on social media.
- KR 1: Increase follower base
- KR 2: Gain 10,000 new followers within three months.
- KR 3: Achieve a 5% increase in follower engagement rate.
- KR 4: Reduce the follower churn rate by 10%.
Objective: Expand social media reach
- KR 1: Increase the average impressions per post by 25%.
- KR 2: Grow the reach of social media campaigns by 20%.
- KR 3: Increase organic reach through shares by 15%.
Objective: Drive website traffic through social media channels.
- KR 1: Achieve a 30% increase in the number of website visits from social media platforms.
- KR 2: Improve the average time spent on the website from social media referrals by 20%.
- KR 3: Increase the conversion rate from social media referrals by 15%.
- KR 4: Increase referral traffic
Objective: Enhance click-through rates (CTR)
- KR 1: Increase the CTR on social media posts by 10%.
- KR 2: Improve the CTR on social media ads by 15%.
- KR 3: Increase the CTR on social media links shared by influencers by 20%.
Objective: Optimize social media campaigns
- KR 1: Reduce the cost per click (CPC) of social media ads by 10%.
- KR 2: Increase the click-to-conversion rate for social media ads by 15%.
- KR 3: Improve the ROI of social media campaigns by 20%.
Talent & development OKR example
Objective: Strengthen Employee Skills and Competencies
- KR 1: Increase the completion rate of employee training programs by 25% within the next quarter.
- KR 2: Achieve a 10% improvement in employee proficiency scores in targeted skill areas through regular assessments and feedback.
- KR 3: Implement a mentorship program where 80% of employees are paired with experienced mentors within six months.
Objective: Foster a Culture of Innovation and Creativity
- KR 1: Launch a quarterly idea generation campaign and receive a minimum of 100 unique employee suggestions within the next three months.
- KR 2: Implement cross-functional innovation workshops, involving at least three departments, to generate and execute innovative ideas.
- KR 3: Increase the number of patents filed by employees by 20% compared to the previous year.
Objective: Promote Leadership Development and Succession Planning
- KR 1: Identify and develop high-potential employees for key leadership positions, resulting in a 30% increase in internal promotions within the next year.
- KR 2: Implement a leadership training program with a minimum participation rate of 80% among targeted individuals.
- KR 3: Develop succession plans for critical roles, ensuring that 100% of key positions have identified successors within the next six months.
Objective: Enhance Employee Engagement and Satisfaction
- KR 1: Conduct an employee engagement survey and achieve a score of 85 or higher on a 100-point scale.
- KR 2: Implement a recognition and rewards program, resulting in a 15% increase in employee satisfaction with recognition efforts within the next quarter.
- KR 3: Establish regular channels for employee feedback and act on at least 80% of actionable feedback within one month.
To learn more about Talent and Development OKRs, visit our blog.
Public Relations and Communications OKR examples
Objective: Enhance Brand Awareness and Visibility
- KR 1: Increase media mentions by 30% through proactive media outreach and strategic press releases.
- KR 2: Expand social media reach by achieving a 20% growth in followers and engagement metrics across key platforms.
- KR 3: Secure at least two speaking engagements at industry conferences or events to position the company as a thought leader within the next quarter.
Objective: Build and Maintain Positive Media Relations
- KR 1: Develop relationships with influential journalists and secure coverage in at least three top-tier media outlets.
- KR 2: Maintain a positive media sentiment score of 80% or higher through effective crisis communication and timely response to media inquiries.
- KR 3: Conduct media training sessions for key executives to improve their media relations skills and interview performance.
Objective: Strengthen Employee Communications and Engagement
- KR 1: Launch an internal communication platform and achieve an adoption rate of 90% among employees within the next month.
- KR 2: Increase employee engagement by implementing regular town hall meetings and achieving a participation rate of 75%.
- KR 3: Develop and distribute an internal newsletter highlighting company updates, achievements, and employee spotlights on a monthly basis.
Objective: Establish Thought Leadership and Industry Influence
- KR 1: Publish a minimum of 10 high-quality thought leadership articles in industry publications, positioning key executives as experts in their respective fields.
- KR 2: Secure speaking opportunities for executives at industry conferences and webinars to share insights and expertise.
- KR 3: Increase the company’s social media engagement with thought leadership content by achieving a 25% growth in shares, comments, and likes.
Objective: Enhance Stakeholder Relations and Communications
- KR 1: Conduct a stakeholder analysis and develop targeted communication plans for key stakeholders, ensuring regular and tailored communication.
- KR 2: Implement a customer feedback program, resulting in a 20% increase in positive customer sentiment and satisfaction scores.
- KR 3: Establish partnerships and collaborations with at least three relevant organizations or influencers to expand reach and engagement.
Committed vs Aspirational
Committed objectives: These are objectives that are realistic and achievable within a given timeframe. They are based on the current capabilities, resources, and constraints of the team or organization. Committed objectives are typically set at a level that the team or individual is confident they can accomplish with a reasonable amount of effort and resources. Example: Increase website traffic by 20% in the next quarter.
Aspirational objectives: These are stretch goals or ambitious targets that push the boundaries of what is currently possible. Aspirational objectives are designed to challenge individuals or teams to go beyond their comfort zones and strive for significant growth or breakthrough performance. They may require innovative approaches, additional resources, and a willingness to take risks. Example: Achieve a 100% customer satisfaction rating by the end of the year.
To learn more about Committed and Aspirational OKRs, visit our blog, where we talk about it in more detail.
Conclusion
The blog serves as an example of the value of developing challenging yet realistic goals that encourage teams to push their limits. Organizations can monitor progress, spot areas for improvement, and adjust their strategies as necessary by defining clear and quantifiable key results. The examples illustrate data-driven decision-making and ongoing learning, and OKRs promote frequent feedback and reflection. Talk to our experts and coaches to gain more insights, or try “Datalligence” for free.
submitted by WearyString557 to u/WearyString557 [link] [comments]
2023.06.01 08:50 besil Roma vs USA
2023.06.01 07:07 EnvironmentalAd6029 Hypothetical food for thought: would getting the PA Amish population out to vote in large numbers help the GOP? And is it a viable strategy looking into?
2023.06.01 07:00 lptos Application/School list help - CA ORM, average stats, ZERO non-clinical
Hello, I received my MCAT score back recently and would like some input on my school list and application for this cycle. For reference, I graduated in 2022 and am heading into my 2nd gap year now. I am absolutely open to either MD or DO. I do prefer to stay on the coasts, but with how my application is looking, I will be happy going anywhere in the US.
Profile: GPA: 3.69 cGPA, 3.65 sGPA (upward trend)
MCAT: 512 (128/128/128/128)
State of Residence: CA (no ties to any other state)
ORM: Asian male
SES: Low, qualified for FAP
Ghost/Preview: Not taken. Just recently found out about this test; planning on taking this July 2023, but j
Extracurriculars: Clinical paid: - Medical assistant: 2200 hrs; at primary care clinic
- ED Scribe: 1000 hrs
Research: - Undergrad research assistant: 300 hrs; wet lab, no pubs, posters, or presentations.
- Clinical research coordinator: 100 hrs; currently in pre-screen/screening phase
Volunteering: Clinical: - (edit) MA for primary care clinic serving older underserved patients: 400 hours
- Hospital volunteer: 400 hours
- ED volunteer: 80 hours
Non-clinical: - 0 in undergrad
- 600 hours - High School; outreach and registration drives for stem cell donors for those in underrepresented demographics
- 100 hours - High School; tobacco/drug use prevention outreach
Shadowing: - 40 hours endocrine in person
- Does any ED scribe experience count as shadowing since I am following the physician at all times and observing?
LORs: - 1 MD, 1 DO with whom I worked with
- 1 science prof
- Difficulty getting non-science and second science profs :(
I will be applying to MDs where I am at least around the median GPA/MCAT. Currently my list is still a very first draft based on those stats. I will also be applying to the big 5 DO schools and a few additional DOs. I know that my non-clinical volunteering is pretty much non-existent. Does high school volunteering count at all? It was mentioned in one of my LORs.
Are there any schools you would include? If there are any schools that you think I am not competitive for and that I should exclude, please let me know! I know Rush and Loyola typically value non-clinical volunteering, but I feel my stats and other ECs are okay. Any advice is appreciated!
MD: Albany
Central Michigan
Eastern Virginia
George Washington
Howard
University at Buffalo (Jacobs)
Temple University (Katz)
Loyola (weak non-clinical volunteering, exclude?)
MCW Wisconsin
Meharry
Michigan State (no Michigan ties, exclude?)
Morehouse
Northeast Ohio
Oregon Health and Science University
Pennsylvania State
UC Riverside (CA resident; no ties to Inland Empire)
UC Davis (requires PREview, which I have not yet taken)
Tulane
SUNY-Upstate
Rush (weak non-clinical volunteering, exclude?)
University of Vermont (Robert Larner MD)
University of Illinois
(Edit) VCU
(Edit) Oakland Beaumont
(Edit) Drexel
(Edit) Quinnipiac
DO: Kansas City University COM
Midwestern University Chicago COM
Des Moines University
Philadelphia COM
ATSU Kirksville COM
Western U - CA
Western U - OR
Touro - CA
Touro - NY
NYIT COM
(Edit) CUSM
submitted by
lptos to
premed [link] [comments]
2023.06.01 06:14 Mariners_bot Post Game Chat 5/31 Yankees @ Mariners
Please use this thread to discuss anything related to today's game. You may post anything as long as it falls within stated posting guidelines. You may also post gifs and memes, as long as it is related to the game. Please keep the discussion civil.
Discord:
Seattle Sports
Line Score - Game Over
| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | R | H | E | LOB |
NYY | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 6 |
SEA | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 5 | 1 | 6 |
Box Score
SEA | IP | H | R | ER | BB | SO | P-S | ERA |
Kirby | 8.0 | 3 | 0 | 0 | 0 | 7 | 95-67 | 3.04 |
Sewald | 1.0 | 0 | 0 | 0 | 1 | 1 | 11-9 | 2.96 |
Topa | 1.0 | 0 | 0 | 0 | 1 | 1 | 19-10 | 3.32 |
Scoring Plays
Highlights
Description | Length | Video |
Bullpen availability for Seattle, May 31 vs Yankees | 0:07 | Video |
Bullpen availability for New York, May 31 vs Mariners | 0:07 | Video |
Fielding alignment for Seattle, May 31 vs Yankees | 0:11 | Video |
Fielding alignment for New York, May 31 vs Mariners | 0:11 | Video |
Starting lineups for Yankees at Mariners - May 31, 2023 | 0:09 | Video |
Clarke Schmidt's outing against the Mariners | 0:23 | Video |
Breaking down Clarke Schmidt's pitches | 0:08 | Video |
Breaking down George Kirby's pitches | 0:08 | Video |
George Kirby's outing against the Yankees | 0:23 | Video |
Gleyber Torres makes a smooth sliding stop in the 2nd | 0:12 | Video |
Ty France makes a crafty stop at first base | 0:16 | Video |
J.P. Crawford makes a spectacular diving grab | 0:19 | Video |
Ty France fouls it off after a review confirmed call | 0:25 | Video |
Julio Rodríguez makes a leaping catch at the wall | 0:18 | Video |
Clarke Schmidt strikes out seven against the Mariners | 0:53 | Video |
George Kirby fans Higashioka to pick up seventh K | 0:08 | Video |
George Kirby strikes out seven over eight innings | 1:05 | Video |
Decisions
Winning Pitcher | Losing Pitcher | Save |
Topa (1-2, 3.32 ERA) | Marinaccio (2-2, 4.00 ERA) | |
Attendance | Weather | Wind |
| 60°F, Cloudy | 8 mph, In From LF |
HP | 1B | 2B | 3B |
CB Bucknor | Chris Segal | Ben May | Brian Walsh |
Game ended at 9:14 PM. submitted by
Mariners_bot to
Mariners [link] [comments]
2023.06.01 05:45 sbpotdbot Sportsbook/Promos/Bonuses Daily - 6/1/23 (Thursday)
Sportsbook and Sports Betting Sign Up Promos and Bonuses
21+ only. If you or someone you know has a gambling problem and wants help, call 1-800-GAMBLER and visit /problemgambling
Sportsbook | Promos | Accepted States | Reviews |
Draftkings | Click for Promo Bet $5+ On Any Pre-Game Moneyline And Win $150 In Bonus Bets | AZ, CO, CT, IL, IA, IN, KS, LA, MA, MD, MI, NY, NJ, OH, ON, PA, TN, VA, WV | Reviews |
Fanduel | Click for Promo No Sweat First Bet $1,000 in Bonus Bets | AZ, CO, CT, IA, IL, IN, KS, LA, MA, MD, MI, NY, NJ, OH, PA, TN, VA, WV | Reviews |
Betrivers | Click for Promo 2nd Chance Bonus Bet Up to $500 | AZ, CO, CT, IA, IL, IN, LA, MD, MI, NJ, NY, OH, PA, VA, WV | Reviews |
Unibet | Click for Promo $500 Second Chance Bet | AZ, IA, IN, NJ, ON, PA, VA | Reviews |
Caesars | Click for Promo Place a first-time wager of up to $1,250, get it back in the form of a Bonus Bet if you lose. | AZ, CO, IA, IL, IN, KS, LA, MA, MD, MI, NY, NJ, OH, ON, PA, TN, VA, WV, DC | Reviews |
WynnBet | Click for Promo Bet $100 Get $100 in Bonus Bets | AZ, CO, IN, LA, MA, MI, NJ, NY, TN, VA | Reviews |
Pointsbet | Click for Promo 5x Second Chance Bets up to $50 each | CO, IA, IL, IN, KS, MD, NY, NJ, OH, ON, VA, WV | Reviews |
BetMGM | Click for Promo Up to $1000 paid back in Bonus Bets if you don't win | AZ, CO, DC, IA, IL, IN, KS, LA, MA, MD, MI, MS, NJ, NY, OH, ON, PA, TN, VA, WV, WY, DC | Reviews |
Betfred | Click for Promo $500 First Bet Refund | AZ, CO, IA, LA, MD, OH, PA, VA | Reviews |
Superbook | Click for Promo $250 Every First Bet Wins | AZ, CO, IA, MD, NV, NJ, OH, TN, VA, WV | Reviews |
Tipico | Click for Promo Deposit Match up to $250 | CO, IA, NJ, OH | Reviews |
Bet365 | Click for Promo Bet $1 and get $365 in Bonus Bets | CO, NJ, OH, VA | Reviews |
Megathread Index US Sportsbooks Canada Sportsbooks 21+ only. If you or someone you know has a gambling problem and wants help, call 1-800-GAMBLER and visit /problemgambling submitted by
sbpotdbot to
sportsbook [link] [comments]
2023.06.01 05:23 Exiled_From_Twitter Tiger did play in a fairly weak era of golf, but so did the other greats for the most part: Historical Masters Data as a Proxy
Recently someone posted the Major results comparison between Brooks and Tiger. It was an interesting comparison, even if not the absolute best way to do it. This was much to the dismay of Tiger stans and brought up some vitriol, exacerbated by my comments that Tiger's era was quite a bit weaker, so even trying to compare peak to peak isn't quite accurate.
Comparing eras is almost impossible in golf; there are just so many variables to consider, including equipment changes, course changes/setup, and the fact that two people can look at the exact same thing and come to two completely different conclusions, both with merit. For instance, you could look at Tiger's dominance as a sign that the era was just a bit weak, which bolstered his overall results. I think it's a fair question, but how do you "prove" it? Because historical data and results are not very accessible, there's just not much out there. I used to have all the major results from circa 1970 through 2015 but it was lost unfortunately, and the site I used to get it no longer exists. I could not find anything comparable and I don't know how to use python or other scraping tools unfortunately. However, one thing fairly easy to gather, and mostly clean, is the Masters historical data, every single one played since 1934. So that is what I have now; it's clearly not comprehensive but it can be used as a decent proxy.
The method was simple - I looked at how many strokes vs. average by round a player was in every tournament and then totaled the rounds - for those who played fewer than 4 rounds, their per-round average was multiplied by the appropriate amount to compare to those who did play all 4. There are instances where someone who barely missed the cut ended up having a better total result (though still not good) than someone who made the cut. Some may not like that, given that people think making the cut is an achievement in itself, but if you make the cut then blow up over the weekend you deserve the worse score.
With each individual result in every Masters played, I could then determine the strength of field of each Masters by looking at a moving average of how every player in that field had performed in Masters tournaments in the 5 years before and/or after. This gives a more accurate account of each player in the field at the time they were in the field; for instance, if you used a simple career average then I would be giving too much credit to this year's Tiger Woods, when we know he's not what he was 15 years ago, and simultaneously discrediting Tiger Woods in the early 2000's, when he was clearly better than now. The field strength for each year is determined by the Top 30 golfers in the field (b/c you're not truly playing the entire field) using the moving average of each player's performance +/- 5 years from the year in question. This gives you an indication of how good each player is in that moment, more so than using their entire career (i.e. Arnie's Masters career is not great, but that's b/c he played in something like 44 of them and was clearly way out of place in the last 25 or so).
With all that explained, here are the "difficulty" results of every Masters since 1960 (they didn't have a cut until like 1957, so a lot of that just doesn't look right):
[Chart: Masters field strength by year - filled data points are Masters in which Tiger participated]
When Tiger burst onto the scene, the field was getting quite a bit better from the early to mid 90's.
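For readers who want to replicate this, here is a rough sketch of the field-strength calculation as I understand it from the description above. It is illustrative only: the function names and the data layout (a mapping of year to each player's strokes-vs-average total) are my assumptions, not the author's actual code, and the optional exclude parameter anticipates the self-exclusion variant discussed below.

```python
from statistics import mean

# Assumed layout: results[year][player] = total strokes vs. the field's
# per-round average for that Masters (negative = better than the field).
Results = dict[int, dict[str, float]]

def player_form(results: Results, player: str, year: int, window: int = 5):
    """Moving average of a player's Masters results within +/- `window` years."""
    scores = [
        results[y][player]
        for y in range(year - window, year + window + 1)
        if y in results and player in results[y]
    ]
    return mean(scores) if scores else None

def field_strength(results: Results, year: int, top_n: int = 30,
                   exclude: str = "") -> float:
    """Average form of the best `top_n` players in a year's field,
    optionally excluding one player (a golfer never faces himself)."""
    forms = sorted(
        f
        for p in results[year]
        if p != exclude and (f := player_form(results, p, year)) is not None
    )
    # Lower totals = fewer strokes vs. average = a stronger field
    return mean(forms[:top_n])
```

Under this convention, a more negative field-strength number means a tougher field, matching the strokes-vs-average framing used throughout the post.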
Based on the method, he already counted towards the field strength b/c of the power of hindsight, but beyond him the field was quite good. In 1997, beyond just Tiger, you also had Jose Maria Olazabal, Fred Couples, David Duval, and Phil Mickelson, who all had very good records at the Masters in the years before and after, all 8 strokes better than the field. In 2004 the field was the best it had been since the mid 60's, largely b/c of Tiger and Phil, who were both 12 strokes better than average during that span. But Vijay was also 9 strokes better, and the field was pretty deep. By Tiger's peak, in his late 20's, though, the field was not great by Masters standards. In 2008 Tiger and Phil were still at the top, but the 3rd best golfer in the field was Angel Cabrera, who was a very good 5.9 strokes better than average - still very low for the 3rd best in the tourney. You see it bounces around a bit thereafter, but the field was often certainly worse from 2006 until 2020, when it finally drops into the truly awesome field that we have now. What is interesting is that the field this past April was not as top heavy but is very deep again. For instance, Rory was just the 28th ranked player in this field by this measure (his Masters history is not as good as his overall record, of course) but at a very respectable 2.8 strokes better than average (tied for the best 28th place player in any field since 1960).
So Tiger's era is a bit of everything, pretty good in the first few years then falling off towards the back end. BUT there's more... the only problem with this is that the field strength takes Tiger himself into account, but he doesn't play against himself, so this is not completely representative of the field that Tiger actually played against. Obviously this is the same throughout - Jack didn't have to face himself, clearly, so to determine the field Jack played against you would need to remove Jack from the equation. Of course it's going to be difficult for the best golfers in the world to face the toughest fields, b/c the field is facing the best golfers... we can still compare, of course. So then what do the fields look like for each golfer with themselves removed from the equation?
[Chart: field strength with each golfer removed - filled data points are Masters in which Tiger participated]
The best fields Tiger has ever played against in the Masters have been in the last two. In 2007 it was one of the worst ever, as the field Tiger faced was 2 strokes worse relative to average than in the last two years. This is what happens when Chris DiMarco is one of the best golfers in the field..... But this still doesn't tell us how it relates to other golfers all-time. So here are the best individual performers in Masters history, using their 10 best performances (min 6) vs. the rest of the field strength during those performances:
[Chart: best individual Masters performers vs. field strength - is Palmer the Masters GOAT?]
And as the caption states, Palmer truly stands out here. His best individual performances stack up against the best, but the strength of the fields he played against is quite easily better than Nicklaus's and Tiger's (even though there was overlap). This is largely b/c Palmer faced off against the game's stalwarts even before Nicklaus entered the mix, going against Snead, Hogan, and the unsung greats of Middlecoff and Mangrum as well. Then a few short years later he's competing against Nicklaus, Player, Venturi, and Casper Jr (with Snead still there in the early goings of the 60's as well).
Long story short: yes, Tiger did face fairly weak fields in his prime, but he also faced some pretty good ones, and they are comparable to those of most other great Masters players aside from the incomparable Arnold Palmer. And yes, the field today is very strong and could rank as the strongest of all time in the near future.
submitted by Exiled_From_Twitter to golf
2023.06.01 04:44 crobinson2279 Low MCAT scorer post/advice
This Reddit post is made to show low MCAT applicants that it is possible. I do not recommend having a low GPA or MCAT. Still, sometimes that is the reality of the situation. Hence, I want to share my story and experience to give hope to others.
I typed out my resume below and removed my personal info for anonymity. Usually, when I read these posts on the thread, a "low MCAT" is described as 505ish. That is true, but as you can see, I am indeed a low MCAT scorer who received 4 acceptances (2 MD waitlist-to-acceptances and 2 straight acceptances from DO schools).
Things I wish I knew:
1) A low GPA and MCAT are pains in the butt and will make this process harder. Before applying, I went through the entire database of medical schools to see which schools would let my application through. I emailed tons of admissions offices and explained my situation. Most were helpful, telling me whether my application would or would not be screened out.
2) If your scores are low, you need something allowing committee members to go to bat for you. I was a very research-heavy candidate and relied on that and my student job/volunteer work to craft my personal statement and discuss it in interviews. (I was grilled in one interview that I looked like a Ph.D. candidate and not a medical school applicant, and this was a valid critique.)
3) This process is long, and everyone is different. I had friends who received no interviews and others with 3+ acceptances and scholarships. You need a support system to keep you from going crazy.
4) Apply early. ADCOMs have a difficult job and will look for any reason to screen applicants out, and I didn't have the luxury of getting another strike by being late to apply.
5) The WL is not a death sentence. Schools often take the high scorers first to inflate their MCAT and GPA averages, which softens the blow of taking individuals with lower scores that they "are taking a chance on."
6) Avoid SDN and Reddit at all costs. If you have a question, ask it, receive an answer, and leave immediately. These websites are a blessing and a curse, full of information, but the people here are hardcore, blunt, and neurotic.
7) You have to plan everything. There is little room for error, and turning in primaries and secondaries is critical (don't rush, but move with some urgency). Prewrite essays, and don't try to reinvent the wheel. If you aren't passionate about this, leave immediately; it's not worth your time or money.
8) Craft your list carefully. I didn't qualify for FAP, but I am not a trust fund baby by any standards. I was very selective in the schools I applied to and selected them because I liked their missions and I knew they would actually look at my application. No point in wasting money that I could spend elsewhere.
9) Your essays need to get you in the door, while your interviews need to show them something. Mentors from other programs read my essays and helped with mock interviews. I believed that I just needed an opportunity to meet the committee members and that I could show them I was a good candidate for their school. The people reading your apps and interviewing you aren't idiots; they will see your BS. Take some time to think about why you are doing this.
10) The deck may be stacked against you, but not everyone matriculating has a 3.8 and 510+. Some people have low stats, and others got off the WL. It only takes one, I promise.
You can do this!!
Stats:
URM from a Northeastern state
HBCU graduate
MCAT 1st attempt: 501
MCAT 2nd attempt: 497
Cum GPA: 3.68 / ScGPA: 3.45 (repeated Orgo II after receiving a D)

ECs:
2-year Fellowship/Postbacc at NIH
Summer Research Internship @ MIT
2 years of neuroscience research in undergrad (poster presentation)
Honors biology lab (poster presentation)

Awards:
University scholarship to study abroad
Business competition, 3rd place award
Hackathon award winner

Skills:
MATLAB (intermediate)
RStudio analysis (novice)
C# (novice)
Community Service:
Student Advisor in Guidance (full-time position, 1500+ hours)
Something Leadership Initiative (2-week service trip in an African country)
Local food bank (120 hours)
Honors society tutor for students with disabilities (120 hours)
Clinical Experience:
Medical Shadow, "Blank" Medical Hospital (150 hours)
Medical Scholar, Summer Health Professions Education Program (~50 shadow hours)
Conferences Attended:
MIT Research Symposium (oral)
NIH Research Week (poster)
Axelrod Symposium (poster)
Society for Neuroscience (poster)
"My university" research week (poster)
School list:
MD: Howard (WL), Meharry, Morehouse, Loyola (WL->A), Tulane (WL->A), Hackensack Meridian
DO: Rowan SOM (A), Ohio Heritage, KCU-COM (A), PCOM, MSU-COM, ATSU

Message me if you have any questions.
submitted by crobinson2279 to Osteopathic
2023.06.01 04:27 GetTherapyBham We have a QEEG brain mapping clinic opening at my office. A lot of the overlap between Jungian ideas and the Beebe model is pretty cool
QEEG Brain Mapping and Neurostim
How does QEEG read personality?
qEEG brain mapping is a powerful tool used by healthcare professionals to analyze various types of brain waves, including delta, alpha, theta, beta, and high beta waves. These waves, with their unique frequencies, provide valuable insights into a person’s neurological functioning and potential cognitive or mental health issues. Below, we delve deeper into what these waves feel like and how they impact thinking.
Delta Waves:
Delta waves are the slowest brain waves, with a frequency of 0.5-4 Hz. They are typically associated with deep sleep and can also be present in coma patients. The sensation of delta waves is often described as a profound state of relaxation, where the mind is in a state of rest and rejuvenation.
Alpha Waves:
Alpha waves have a frequency of 8-12 Hz and are usually observed when a person is awake but relaxed. They are commonly experienced when closing the eyes or practicing meditation. Decreased alpha waves may be linked to anxiety or depression, while increased alpha waves may indicate improved relaxation and stress reduction. The sensation of alpha waves is often described as a state of calmness and peacefulness.
Theta Waves:
Theta waves have a frequency of 4-8 Hz and are typically observed during light sleep or drowsiness. They may also be present during meditation or creative activities. In qEEG brain mapping, an increase in theta waves may be associated with attention deficit hyperactivity disorder (ADHD), while a decrease in theta waves may be associated with cognitive decline in older adults. The sensation of theta waves is often described as a dreamy, introspective state.
Beta Waves:
Beta waves have a frequency of 12-30 Hz and are usually present when a person is awake and engaged in cognitive or physical activities. They are associated with alertness, focus, and concentration. Abnormalities in beta waves can be linked to conditions such as anxiety, depression, and insomnia. The sensation of beta waves is often described as a state of heightened awareness and mental activity.
High Beta Waves:
High beta waves have a frequency of 30-40 Hz and are often associated with intense cognitive or physical activities, such as problem-solving or exercise. An increase in high beta waves in qEEG brain mapping may be associated with conditions such as ADHD or obsessive-compulsive disorder (OCD). The sensation of high beta waves is often described as a state of heightened mental alertness and intense focus.
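To make the band boundaries concrete, here is a minimal, hypothetical sketch of how band power can be estimated from a raw EEG trace with Welch's method; the band edges follow the ranges above, and the signal is synthetic rather than a real recording:

import numpy as np
from scipy.signal import welch

# Frequency bands as described above (Hz)
BANDS = {
    "delta": (0.5, 4),
    "theta": (4, 8),
    "alpha": (8, 12),
    "beta": (12, 30),
    "high_beta": (30, 40),
}

fs = 256  # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Synthetic single-channel EEG: a 10 Hz alpha rhythm buried in noise
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# Welch power spectral density estimate
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

# Absolute power per band: integrate the PSD over each band's range
band_power = {}
for name, (lo, hi) in BANDS.items():
    in_band = (freqs >= lo) & (freqs < hi)
    band_power[name] = np.trapz(psd[in_band], freqs[in_band])

print(band_power)  # alpha should dominate in this synthetic trace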
The MBTI and qEEG Brain Mapping
The Myers-Briggs Type Indicator (MBTI) is a widely used personality assessment that categorizes individuals into 16 distinct personality types based on four dichotomies: extraversion vs. introversion, sensing vs. intuition, thinking vs. feeling, and judging vs. perceiving. Quantitative EEG (qEEG) brain mapping is a diagnostic tool used to measure and map brainwave activity across different regions of the brain. Researchers have explored potential connections between these two domains to establish a relationship between them.
Several researchers have proposed that the various brainwave frequencies observed in a qEEG brain map may correspond to the functions identified in the MBTI. However, the precise relationship between qEEG brain waves and MBTI functions remains a subject of research and debate.
One proposed connection suggests that the alpha brainwave frequency, associated with relaxed wakefulness and meditation, is linked to the MBTI function of intuition. Alpha waves reflect a state of relaxed focus that fosters insight and creativity, which may facilitate the intuition function involving generating insights and making connections based on patterns and associations.
Another proposed connection suggests that the beta frequency, associated with focused attention and alertness, may correspond to the MBTI function of sensing. Beta waves reflect a state of focused attention that enables precise and detailed perception, potentially facilitating the sensing function of gathering data through the senses and paying attention to concrete details and facts.
Furthermore, the theta frequency, associated with daydreaming and creative states, is purported to correspond to the MBTI function of feeling. Theta waves reflect a state of relaxed and open awareness, fostering creative and imaginative thinking that may facilitate the feeling function of evaluating and assessing information based on personal values and emotional responses.
Likewise, the delta frequency, associated with deep sleep and unconscious processing, may correspond to the MBTI function of thinking. Delta waves reflect a state of unconscious processing that supports problem-solving and decision-making, potentially facilitating the thinking function of analyzing and evaluating information based on logic and reason.
However, it is important to note that while some correlations between qEEG brain waves and MBTI functions have been proposed, conclusive evidence for these connections is lacking. The brain is a complex and dynamic system, and it is unlikely that a single brainwave frequency can fully account for a specific cognitive or personality function. Additionally, the MBTI relies on self-report assessments, introducing biases and limitations.
Nonetheless, exploring the potential connections between the different brainwaves observed in a qEEG brain map and the functions identified in the MBTI can yield valuable insights into the relationship between brain activity and personality.
Interpretation of QEEG Brain Maps:
QEEG brain maps are generated by analyzing the electrical activity of the brain recorded through specialized caps with multiple electrodes placed on the scalp. These maps typically display different brain speeds, including delta, theta, alpha, beta, and high beta, which correspond to different states based on circadian rhythms. Interpretation of these brain speeds involves analyzing the colors displayed on the map, which indicate whether the brain is using these speeds at higher or lower levels than optimal.
Colors on the QEEG brain map:
The colors on the QEEG brain map play a crucial role in interpreting the brain’s activity. Yellow, orange, and red colors indicate that the brain is using one to three levels too high of a particular speed, while blue colors suggest that the brain is using one to three levels too low of that speed. This color-coded information helps in identifying any imbalances or irregularities in brain activity, providing valuable insights into the functioning of the brain.
Overall power and relative power:
The top row of heads on the QEEG brain map represents the overall power of each brain speed, indicating how charged up the brain is overall. This information helps in understanding the overall activity levels of different brain speeds. Additionally, the relative power displayed on the map shows which brain speed is being used the most and the least in comparison to others. This data provides important clues about the brain’s dominant and less dominant activity levels, aiding in the interpretation of QEEG brain maps.
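As a rough arithmetic illustration of overall vs. relative power: relative power is just each band's share of the summed power across all bands. The numbers below are invented for demonstration:

# Toy absolute band powers (e.g., from a Welch estimate), in uV^2
band_power = {"delta": 12.0, "theta": 8.5, "alpha": 20.0, "beta": 6.0, "high_beta": 1.5}

total_power = sum(band_power.values())  # overall power: how charged up the brain is
relative_power = {band: p / total_power for band, p in band_power.items()}

most_used = max(relative_power, key=relative_power.get)
least_used = min(relative_power, key=relative_power.get)
print(most_used, least_used)  # alpha, high_beta for these toy numbers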
Parameters at the bottom of the map:
The QEEG brain maps also include parameters at the bottom of the map that provide insights into the communication between different brain areas. These parameters, including amplitude, asymmetry, coherence, and phase lag, represent the networks in the brain and how different areas communicate with each other. For instance, frontal areas responsible for attention and executive function are labeled with “F,” central areas with “C,” temporal areas with “T,” and occipital areas with “O.” The analysis of these parameters and the lines connecting different areas on the map help in understanding the efficiency of communication between brain regions.
Z-Score
The Z-score coherence is a measure of functional connectivity between two regions of the brain. It provides an estimate of the strength of the coherence between the signals recorded from different electrode sites, compared to a database. The coherence is a measure of the degree to which two signals are synchronized or correlated, indicating the degree of functional connectivity between different brain regions. The Z-score is a statistical measure of how far the coherence value is from the average coherence value in the normative database.
The Z-score amplitude is a measure of the power or strength of the electrical activity in a particular frequency band within a specific region of the brain. The amplitude is the measurement of the size or magnitude of a particular EEG wave. The Z-score amplitude is the statistical comparison of the amplitude value of a particular frequency band within a specific region of the brain compared to a normative database.
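A hedged sketch of the z-score idea follows: both measures compare an observed value against the mean and standard deviation of a normative database. The synthetic signals and the normative values below are invented for illustration only:

import numpy as np
from scipy.signal import coherence

fs = 256
rng = np.random.default_rng(0)

# Two synthetic "electrode" signals that share a common component
shared = rng.standard_normal(fs * 10)
sig_a = shared + 0.8 * rng.standard_normal(fs * 10)
sig_b = shared + 0.8 * rng.standard_normal(fs * 10)

# Coherence between the two sites, averaged over the alpha band (8-12 Hz)
freqs, coh = coherence(sig_a, sig_b, fs=fs, nperseg=fs * 2)
alpha_coh = coh[(freqs >= 8) & (freqs < 12)].mean()

# Hypothetical normative-database mean and standard deviation
norm_mean, norm_sd = 0.45, 0.10

z = (alpha_coh - norm_mean) / norm_sd  # distance from the normative average
print(f"alpha coherence = {alpha_coh:.2f}, z = {z:+.2f}")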
Both Z-score coherence and amplitude are useful in the assessment of brain function and dysfunction. They can provide valuable information about the patterns of brain activity associated with various neurological and psychiatric conditions, such as attention deficit hyperactivity disorder (ADHD), depression, anxiety, and traumatic brain injury. Z-score coherence and amplitude can also be used to guide neurostimulation treatments to target specific brain regions and frequencies for optimal outcomes.
Amplitude Asymmetry
Amplitude asymmetry refers to the difference in the electrical activity between the left and right hemispheres of the brain. It is typically measured as the difference in amplitude between homologous electrode sites located on each hemisphere. An abnormal amplitude asymmetry may suggest a disruption in the normal functioning of the brain, and has been associated with various neurological and psychiatric conditions, including depression, anxiety, and schizophrenia.
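One common way to quantify this is a log-ratio between homologous sites, sketched below with made-up amplitudes; note that conventions vary (some use raw differences, or power rather than amplitude):

import numpy as np

# Hypothetical alpha-band amplitudes at homologous sites F3 (left) and F4 (right), in microvolts
amp_left, amp_right = 4.2, 5.1

# Log-ratio asymmetry: positive means relatively greater right-hemisphere amplitude
asymmetry = np.log(amp_right) - np.log(amp_left)
print(f"amplitude asymmetry = {asymmetry:+.3f}")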
Phase Lag
Phase lag is a measure of the delay in the propagation of neural signals between different regions of the brain. It is a measure of the temporal relationship between two or more EEG signals recorded from different electrode sites. Phase lag is typically calculated by measuring the time delay between two signals at a given frequency. An abnormal phase lag may suggest a disruption in the normal communication between different brain regions, and has been associated with various neurological and psychiatric conditions, including attention deficit hyperactivity disorder (ADHD), autism, and traumatic brain injury.
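A minimal sketch of recovering a time delay from the cross-spectrum phase at a given frequency; the two signals are synthetic, and the sign of the recovered lag depends on the cross-spectral-density convention used:

import numpy as np
from scipy.signal import csd

fs = 256
t = np.arange(0, 10, 1 / fs)
true_delay = 0.02  # sig_b trails sig_a by 20 ms

sig_a = np.sin(2 * np.pi * 10 * t)                 # 10 Hz "source"
sig_b = np.sin(2 * np.pi * 10 * (t - true_delay))  # same source, delayed

# Cross power spectral density; its phase angle encodes the lag
freqs, pxy = csd(sig_a, sig_b, fs=fs, nperseg=fs * 4)
idx = np.argmin(np.abs(freqs - 10))  # frequency bin closest to 10 Hz
phase = np.angle(pxy[idx])

# delay = -phase / (2*pi*f) under scipy's conj(X)*Y convention
estimated_delay = -phase / (2 * np.pi * freqs[idx])
print(f"estimated lag ~ {estimated_delay * 1000:.1f} ms")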
Implications of QEEG Brain Map Interpretation:
Interpretation of QEEG brain maps can have significant implications for understanding brain function and identifying any abnormalities or imbalances in your brain. By analyzing the brain’s activity levels, dominant and less dominant patterns, and communication between different brain areas, QEEG brain maps can provide valuable insights into the functioning of the human brain. This information can be used in various clinical and research settings, such as identifying neurological disorders, monitoring treatment progress, and optimizing cognitive performance.
submitted by GetTherapyBham to Jung
2023.06.01 03:56 AlecGoLdStEiN Outjerked again