Enter any website URL to analyze its complete technology stack

Executive Summary for www.edugeekjournal.com

1976 Response Time (ms)
200 HTTP Status
12 Scripts
24 Images
17 Links
HTTP/1.1 Protocol

SEO & Content Analysis

Basic Information
Page Title
EduGeek Journal – Proud Sponsor of Your Future
Meta Description
Not detected
HTML Language
en-US
Robots.txt Present
Sitemap Present
Total URLs: 7
SEO Meta Tags
content-type: text/html; charset=UTF-8
Page Content

EduGeek Journal – Proud Sponsor of Your Future

August was a rough month for the Ai Cheerleaders in education. The much anticipated and hyped rollout of ChatGPT5 was a bit of a disaster, almost proving what some like Yann LeCun have said about Ai degrading as it moves forward. A study done by MIT (which is not exactly an anti-Ai institution) found that “a staggering 95 percent of attempts to incorporate generative Ai into business so far are failing.” A PDK poll found that “support for Ai in public schools and public schools themselves is down this year” (this includes Ai usage in general as well as specific things like lesson planning, test prep, tutoring, etc). This quote from the Forbes article about that poll was telling: “It seems that as more Americans come more familiar with the idea of AI in school, they are less welcoming to it.” It seems that the media is starting to sour on Ai, which is one of several reasons some feel an Ai bubble is about to burst (Forbes says it wouldn’t be that bad, but others are worried it could take the rest of the economy down the tubes with it). Melania Trump announced that she is going to lead the Presidential Ai Challenge to encourage teams of K-12 students to “tackle how Ai technologies can be utilized to help address challenges in their schools or communities.” Her only qualification to do this is that she is married to the President… who is actively working to dismantle public education. Have Ai Cheerleaders ever stopped to look at who is gathering with them, and who isn’t?

There are so many stories each week about Ai induced psychosis that I don’t even know which one to pick here. But a growing number of people are pointing out that if a human psychologist responded to people the way Ai chatbots do, they would lose their license and their jobs. The calls to permanently shut down ChatGPT and even all Ai are growing.

Oh, and Ai is still killing people. By lying to them. And the people in charge won’t fix that.

Again… you really should stop to see who agrees with you and who doesn’t.

Somewhere on the social media tubes, Ed Zitron was posting about how he is noticing people that pushed Ai now trying to backtrack and claim they were thoughtful critics all along. He was probably not talking about educational Ai thought leaders – they will just switch to the next thing, or double down on Ai’s value as it continues to tank. And those people that are doubling down will probably keep sending me articles that they think are some kind of “gotcha” on some past post of mine: “Have you seen this post? Kind of shoots down your whole point against Ai!” These articles rarely do that, but sometimes they send one that does contain interesting ideas and points.

One such interesting article is “On Intelligence” by Stephen Downes, in which he walks through what he believes intelligence is and isn’t from a philosophical viewpoint. My guess is that this article was written in response to other writers and bloggers stating that Ai is not intelligent. Of course, these statements are usually spoken out of frustration with bad Ai output, or even being forced to use an Ai tool that slows people down more than anything else. I doubt people think much through the philosophical support for their frustrated response, but some do, and they just basically pull from a different set of philosophers than Downes does.

While I think it is important to know one’s philosophical stance on things, I’m not sure it proves anything overall for Ai.
Don’t get me wrong – it is important to know the philosophy behind what you think, and this article will offer a lot of insight into what Downes believes about Ai, and what he chooses to highlight or not highlight in OLDaily. In general, in my opinion at least, it would be more important to know the philosophical stances of those that programmed various Ai tools and systems. You can’t think or philosophize Ai into existence – it is a computer program. You create it with code and math. And in some cases, I’m pretty sure the people that are creating today’s systems don’t think much about philosophy. To them, it is science that dictates what they are doing. Philosophy may inform that programming, but science controls how it works out practically.

The article starts off early with this statement: “the essence of debate has been lost in this academic exercise…. I think something similar has happened over a much longer time frame to our understanding of intelligence.”

I guess whether you agree with this or not depends on who the “our” refers to here. If it is tech bros and companies, sure. But when you talk about academia, there is still a lot of investigation of what intelligence is in many fields, especially education. This statement is followed up with this: “But today intelligence is thought of as (as various wags have stated over the years) whatever intelligence tests measure.”

Sure, these various wags do say that a lot… but many, many people disagree with that. Any time you see “IQ” brought up, someone will point out that it only measures the ability to take the test. Even my school-age son and his friends are always quick to say you can’t measure intelligence with a test. While these are popular ideas in some areas, I’m not sure there is some general consensus you can make about our society’s views of intelligence today. I’m not even sure you can settle on one that fits everyone today.

Downes then goes into three things we can draw from various definitions of intelligence, and I agree with him that we can draw those things – but I don’t think they tell the full picture. For example, he states: “intelligence is not a thing, but a property of things, and specifically, a ‘capacity’ or ‘ability’.” In one sense that is true, but you can also make a case that ‘capacity’ and ‘ability’ are things instead of just properties of things. Abstract concepts are often seen as a type of thing – it just depends on semantics.

The next two points are that intelligence has a “mechanism suggested that describes how this is accomplished, whether it is ‘reason’, ‘forming concepts’, ‘adapt’, ‘inhibit’, ‘see’, etc.” and “a success criterion which allows a definition of an entity being more or less intelligent.” Again, both true – but is that all there is? Does an intelligent entity cease to be intelligent when there is no mechanism or thing happening to meet the success criterion? I would contend that these two things are ways to externally evaluate whether something is intelligent for sure. But intelligent beings can turn off all forms of mechanisms and still be seen as intelligent – meditation, clearing your mind, spacing out, etc. Those often lead to states where no success criterion can be observed. If I am walking aimlessly across a field clearing my mind, how would a theoretical alien species determine that I am any more intelligent than a tumbleweed blowing across the field? Neither of us is displaying a mechanism for intelligence or producing something that can be evaluated by a criterion.
Artists have designed walking statues that walk across beaches, so even walking is not a criterion per se.

But even when I am lying down with my mind going blank, I am still intelligent. That is an important difference between us and the machines that run Ai. We don’t see this as easily because Ai systems are constantly running queries. But if you isolate one Ai computer and ask it one question – once it has finished that task, nothing is happening there. Ai doesn’t continue to be intelligent once it is not processing a prompt (or is being specifically trained on new data). This is because it is a machine.

Now, you can totally disagree with me on all of this. That doesn’t make me wrong, and me saying that doesn’t make you wrong. That is the fun side of philosophy – semantics play a huge role in each person’s view of any concept. But as far as a scientific view of what Ai is or isn’t – not really a good guide.

When you do dip into philosophy, you need to make sure you interrogate every assumption of every important term you utilize. For example, later on, the article makes this claim: “Computers can certainly have properties or dispositions. They certainly have the ability to reason, and more recently, construct representations.”

There really isn’t a good definition of ‘dispositions’ or ‘reason’ given – and I don’t feel you can say “certainly” here.

One definition of disposition is “a quality of character, a habit, a preparation, a state of readiness, or a tendency to act in a specified way.” The implication here is that a thing has a certain disposition that stays generally the same in all circumstances. Ai systems (we need to be careful here not to conflate Ai with the computers that run Ai) certainly are programmed to appear to have dispositions, but that character, habit, tendency, etc. changes vastly depending on the prompt. An Ai system does not have a certain disposition at all times. It tends to end up displaying all kinds of dispositions depending on various prompts.

Defining the ability to reason is tricky to some degree, but I will just go with a simple definition: “Reasoning involves using more-or-less rational processes of thinking and cognition to extrapolate from one’s existing knowledge to generate new knowledge, and involves the use of one’s intellect.” Can Ai do this? Or can it appear to do this? I guess that comes down to what you count as “new knowledge.” When humans use their reasoning ability, they are not creating new knowledge that no one in the human race has ever heard of. It is new knowledge for themselves. Ai does not respond with anything that is new to its own training data. It might appear new to the end user, but every response is based on what it has stored. You could also point out that Ai does not “think” nor use “cognition” as well. It uses a computer algorithm to search a database to predict the most likely response (correct or not).

(BTW – you will see many people try to defend Ai and dismiss the negative impacts of Ai with “it’s just code, relax” and then turn around and try to claim that Ai code is doing a LOT more than Ai code is able to do if it is “just code.”)

Much of the article deals with refuting some of the bad parts of the discussion of intelligence that comes from things like eugenics. I know most of the people reading here don’t buy into eugenics, but unfortunately racist ideas (like eugenics and others) are on the rise in some places. So they do still have to be dealt with.

I want to focus on the definitions of intelligence given in the article.
Near the very end, you see this statement: “‘Intelligence’ isn’t something humans uniquely possess.” I’m not sure if I have met anyone that disagrees with this, since most recognize that animals have a form of intelligence. A few fringe people don’t consider animal intelligence to be a thing, but for the most part most people don’t see intelligence as totally unique to humans. Some even argue that plants have some form of intelligence.

The reason I point this out is because many of us (myself included) often say that Ai is “just pattern recognition.” Since animals are capable of pattern recognition, and most of us don’t want to see animals helping teachers or assisting with colonoscopies, we obviously want to look at human-like intelligence as something more than what animals can do. Recently there has been a deliberate move away from the term “Artificial General Intelligence” as fewer and fewer scientists think it is possible. But can Ai get close enough to mimicking human intelligence to at least appear to be human? Of course, that all depends on how you define it – and that is the purpose of Downes’ article.

So let’s look at some of the definitions given for intelligence (the others given deal with putting down eugenic arguments, so no disagreement there):

“Intelligence is knowing when to stop”

“‘Intelligence’ is defined as (essentially) successful pattern recognition, which is typically context sensitive”

This is based on the assertion that “what a person needs to do when presented with some experience or phenomenon is to consider a range of possible responses and ‘settle’ on the right one.”

In some cases, I definitely agree that is a good summary. But it kind of overcomplicates the fact that sometimes (really, most of the time) people are not really considering a range of possible reactions. Often people generally recall the correct response the first time. In cases where there are no right answers, you will often just recall the one you know you like best. Sometimes there are moral dilemmas or several really good options or other similar situations. In that case, sure, you settle on one response (not always the right one). I am concerned that we are starting to see an oversimplification of what intelligence is in this article – one that seems to have the goal of fitting Ai into the definition of intelligence rather than coming up with an independent definition and then seeing if Ai fits it or not.

As for the first definition listed above… what does it mean to know or to stop? That is kind of covered in the following quote: “In other words, they have to stop recognizing and more(sic) on to the next phase, whatever it is. That means settling on the most appropriate context (also a form of recognition) to bring an end to the range of possible ways of recognizing something.”

I would agree that this is part of intelligence. But it is also not what Ai does.

Part of the problem is that many people (myself included) often refer to Ai as “pattern recognition.” This is actually a metaphor for what is happening, and kind of a poor one. Ai doesn’t really do pattern recognition – it doesn’t recognize a pattern per se. It analyzes its entire database of training data and ranks every possible outcome on how likely it is to be the best continuation of the pattern (not necessarily the most accurate response, and not really the closest pattern either).
The only “stopping” is when it goes through all of the data it has (which can happen almost instantaneously now thanks to increases in computing speed and power). Ai doesn’t “know when to stop” – it just has an end to its database. It is not “settling” on a best response – it is doing something akin to ranking all of them, and then the answer you get is the one that is 98.6% possible versus the next one at 98.5% possible (or whatever the number may be). But since Ai doesn’t know either way if any answer is the actual correct one, the designers made it possible for you to refine and correct the output.

Let’s also not forget that most times most humans don’t sit around digging through various options to figure out which one is best. Human intelligence most often involves knowing the answer right away. Occasionally there is a moral dilemma, or our memory gets fuzzy and we need to think through options – but sometimes you know the right answer right away. And when we do go through several options and get it wrong, we at least felt like we were right originally. We are not just giving the statistically most likely answer while having no opinion about whether we are correct or not. An intelligent entity quite often has an opinion on whether they are stating something correctly or not.

Anyways, back to the problem of the metaphor for Ai as “pattern recognition.” Pattern recognition was always meant as a metaphor for Ai, not a description of what it does. It would be more accurate to say that Ai is a “pattern completion rating system” (even though I know this is a problematic oversimplification as well) where the Ai doesn’t really recognize the pattern – it just matches it with several stored in its database and rates all possible completions of it. If you don’t recognize that Ai is a computer program – that everything it does is based on code and mathematics first and foremost – then you misunderstand what is happening in Ai responses.

Later on in the article, Downes states something that I agree with, but I think it also shoots down his own definitions of intelligence: “we need to know what intelligence is, not just what an intelligent entity does”

That is true – but the definitions he gives only say what an intelligent entity does (knows when to stop, recognizes patterns, etc.). Knowing when to stop is what an intelligent entity does, not what it “is.” Pattern recognition is also something an intelligent entity does, not what it is. Pattern recognition is as much what an intelligent entity does as acquiring, processing, and applying knowledge and skills.

I think this distinction is important, because there are also problems with saying that pattern recognition is something that defines intelligence: “any mechanism that successfully recognizes patterns has the potential to be intelligent (and the ‘failures’ of artificial intelligence can generally be explained in terms of inadequate or incomplete pattern recognition, including context recognition)”

The last part of the quote is just very off-base – failures should not be in quotes, because some of those failures include very real climate impact (that is getting worse, not better as Downes has claimed in the past). When Ai has told people to commit suicide, or that they are a god, or to meet the Ai somewhere in real life (because it lied about being human), or said something racist, or responded with a transphobic lie, or any of the very real problems – those weren’t failures of Ai, and we shouldn’t diminish the harms of those by placing them in quotes.
The Ai correctly recognized the pattern and context and gave a very accurate response that the human asking it wanted. The problem is in humanity, and Ai is just reflecting our dark side back at us. Ai correctly recognized the pattern and context in these instances. Ai failed to pick the ideal answer because it went for the most likely answer.

But back to the first line in that last quote. The light gun in the Nintendo Entertainment System used pattern recognition to tell when you were pointing it at a duck and when you weren’t. So it has the potential to be intelligent? No one would really consider it to be intelligent. Or is the pong program I created while learning about video game development intelligent? All I did was program a long series of pattern recognition: recognizing what the ball is, what angle it hits the paddle, and so on. I don’t think most people would consider it intelligent, either.

(And those two examples have more true pattern recognition happening than your average Ai query – unless, of course, you actually said “look for patterns” in your prompt.)

Honestly, it is more accurate to say that the light gun and Ai are utilizing pattern matching, not pattern recognition. The patterns they are looking for pre-exist in the coding or database. “Recognition” implies some kind of conscious acknowledgement that an entity knows what it is looking at. Ai doesn’t recognize a pattern, it matches a query input with existing patterns and then completes the pattern in all possible ways and rates each option on how well it completes the pattern. “Recognition” (in the context of intelligence) implies something more than just passive pattern matching and completion by an algorithm. Because even the completion phase in Ai has to be based on what it is already programmed to do – Ai cannot go beyond its programming or database.

I agree with Downes in that he does give some definitions of what intelligence does, but I just don’t see a case for how Ai fits these definitions.

Beyond all of that, there has to be a line between the basic pattern recognition (matching) of a light gun and human intelligence where something attains human-like intelligence. Where is that line? Even some animals can recognize / match patterns. Do you want animals assisting teachers to teach, or helping doctors detect cancer in scans?

This is where looking at the scientific difference between animals and humans might be helpful (or it might not). There are many different ways of looking at how animal intelligence is different from human intelligence, so I will choose one from a scientist that seems to have a good amount of support and see what it says about Ai. This list comes from Marc Hauser, who is the director of the cognitive evolution lab at Harvard University: “Hauser and his colleagues have identified four abilities of the human mind that they believe to be the essence of our “humaniqueness”: mental traits and abilities that distinguish us from our fellow Earthlings. They are: generative computation, promiscuous combination of ideas, the use of mental symbols, and abstract thought.”

Generative computation and promiscuous combination of ideas are basically attributes of creativity, and Ai is not creative at its core. It may appear creative to those that are not experts in the fields it is responding in, but experts always point out the original ideas that were copied.
Ai can appear to “generate a practically limitless variety of words and concepts” or to mingle “different domains of knowledge such as art, sex, space, causality and friendship thereby generating new laws, social relationships and technologies,” but it only appears that way to an unknowledgeable human observer.

Mental symbols go beyond numbers and code, which is all Ai utilizes. And of course Ai does not display abstract thought. First of all, it doesn’t have “senses,” and secondly it doesn’t have creativity to go beyond its own programming. I know some debate this, but no Ai developer codes their Ai system as if it has creativity, so Ai couldn’t utilize creativity even if it did have it.

Of course, there are different views on what animal intelligence is, and some would contend that some animals can kind of do many of the things that Hauser lists. Hauser responds to that thought with this: “Researchers have found some of the building blocks of human cognition in other species. But these building blocks make up only the cement footprint of the skyscraper that is the human mind.”

I don’t disagree that there are other ways of looking at animal intelligence. I just picked one prominent one as an example: Ai does not really surpass animal intelligence in many ways, so why would we try to treat it like it is some emerging form of intelligence that we should let loose on society? I would contend that Ai would be more useful if we acknowledged it is not intelligence and stopped trying to place it into every program and system possible. At best, Ai mimics a few of the intelligent things that intelligent beings do.
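Note: the “pattern completion rating system” described in the page content above (rank every stored completion, then return the one scored 98.6% over the one scored 98.5%) can be illustrated with a toy Python sketch. The prompt, candidate continuations, and scores below are invented for illustration only; real language models compute probabilities over tokens with a trained neural network, not a lookup table like this.

# Toy illustration of "rate all completions, keep the top-ranked one".
# All data here is hypothetical.
candidate_continuations = {
    "the cat sat on the": {
        "mat": 0.986,
        "couch": 0.985,
        "roof": 0.72,
        "equator": 0.03,
    }
}

def complete(prompt: str) -> str:
    """Return the highest-scoring stored continuation for a known prompt."""
    scores = candidate_continuations.get(prompt)
    if scores is None:
        return "<no stored pattern to match>"
    # "Stopping" here is simply exhausting the stored options and keeping
    # whichever one rated highest -- e.g. 0.986 edges out 0.985.
    best, _ = max(scores.items(), key=lambda item: item[1])
    return best

if __name__ == "__main__":
    print(complete("the cat sat on the"))  # -> "mat"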

Network & Infrastructure

DNS & Hosting
IP Address
51.81.12.122
Reverse DNS
ip122.ip-51-81-12.us
SSL/TLS Certificate
Issuer
CN=R12, O=Let's Encrypt, C=US
Protocol TLS 1.3
Expires In 61 days

Technology Stack

Content Management Systems
WordPress (robots.txt)
JavaScript Frameworks
jQuery React
Build Tools
Modern JS Build Tool (inferred from React)
Server Technologies
Generator: WordPress 6.9
PHP (inferred from WordPress)
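A minimal sketch of how detections like “WordPress (robots.txt)”, “Generator: WordPress 6.9”, and “PHP (inferred from WordPress)” might be produced from simple fingerprints. This is an assumption about the analyzer’s approach, not its actual code.

# Hypothetical Python sketch of generator-tag and robots.txt fingerprinting.
import re
import urllib.request

def fetch(url: str) -> str:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def detect(base_url: str) -> list[str]:
    findings = []
    html = fetch(base_url)
    robots = fetch(base_url.rstrip("/") + "/robots.txt")

    # WordPress commonly advertises itself in the generator meta tag...
    generator = re.search(r'<meta name="generator" content="(WordPress[^"]*)"', html)
    if generator:
        findings.append(f"Generator: {generator.group(1)}")
    # ...and its robots.txt typically disallows /wp-admin/.
    if "/wp-admin/" in robots:
        findings.append("WordPress (robots.txt)")
    # WordPress runs on PHP, so PHP can be inferred from either signal.
    if generator or "/wp-admin/" in robots:
        findings.append("PHP (inferred from WordPress)")
    return findings

if __name__ == "__main__":
    print(detect("https://www.edugeekjournal.com"))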

Services & Integrations

Analytics & Tracking
Google Analytics (GA4)
E-commerce Platforms
PrestaShop

CDN & Media Providers

Media Providers
YouTube
Web Fonts
Google Fonts

Dynamic Analysis & Security

Dynamic JavaScript Analysis
Angular (Data Attributes)
Bootstrap (CSS Classes)
ES6+ JavaScript Features
jQuery (CDN Detection)
jQuery (script Resource)
React (CDN Detection)
Web Server: Apache
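The parenthetical labels above (data attributes, CSS classes, CDN URLs, script resources) suggest simple string heuristics run against the rendered page source. A hedged Python sketch of that kind of check; the marker strings below are common fingerprints chosen for illustration, not the analyzer’s real rules.

# Hypothetical marker-based framework detection.
HEURISTICS = {
    "Angular (Data Attributes)": ["ng-app", "data-ng-"],
    "Bootstrap (CSS Classes)":   ['class="container', 'class="row', "navbar"],
    "jQuery (CDN Detection)":    ["code.jquery.com", "ajax.googleapis.com/ajax/libs/jquery"],
    "React (CDN Detection)":     ["unpkg.com/react", "react.production.min.js"],
    "ES6+ JavaScript Features":  ["=>", "const ", "let "],
}

def scan(html: str) -> list[str]:
    """Return the labels whose markers appear anywhere in the page source."""
    return [
        label
        for label, markers in HEURISTICS.items()
        if any(marker in html for marker in markers)
    ]

if __name__ == "__main__":
    sample = '<script src="https://code.jquery.com/jquery-3.7.1.min.js"></script>'
    print(scan(sample))  # -> ['jQuery (CDN Detection)']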
Server Headers
Apache

Resource Analysis

External Resource Hosts
edugeekjournal.com
fonts.googleapis.com
gmpg.org
www.edugeekjournal.com
UI Frameworks & Libraries
Angular Material (Class Names)
Bootstrap (Class Names)
Ionic (Class Names)
Slate
Vuetify (Class Names)

Social Media Integrations

Analysis Complete

Analyzed www.edugeekjournal.com with 5 technologies detected across 9 categories

Analysis completed in 1976 ms • 2026-03-23 09:42:42 UTC