Featured Post

A convex polyhedron with 90 vertices has recently been claimed to be the first known example without the Rupert property



A polyhedron is said to have the Rupert property if a hole can be cut through it so that an identical copy of the original polyhedron can pass through that hole. For example, the following plot shows that the cube has the Rupert property.

P := plots:-polyhedraplot([0,0,0], polytype=hexahedron, style=line, color="DarkBlue", axes=none):
Q :=  plots:-polyhedraplot([0,0,0], polytype=hexahedron, style=line, color="Gold", axes=none):
plots:-display(Q, plottools:-rotate(P, Pi/3, Pi/4, Pi/8), orientation=[0,0,0]);

The edge-on view of the yellow cube shows exactly the hole you'd need to cut through the blue cube so that a cube of the same size can pass through.
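To make that picture more concrete, here is a small sketch of my own (not part of the original plot command) that draws the two silhouettes in the plane: a cube of side 2 stands in for the gold cube, and the same cube rotated by the angles used above stands in for the blue one. If the gold square sits strictly inside the blue outline, that square is the cross-section of the hole.

# rotation matrices about the three coordinate axes
Rx := a -> Matrix([[1,0,0],[0,cos(a),-sin(a)],[0,sin(a),cos(a)]]):
Ry := a -> Matrix([[cos(a),0,sin(a)],[0,1,0],[-sin(a),0,cos(a)]]):
Rz := a -> Matrix([[cos(a),-sin(a),0],[sin(a),cos(a),0],[0,0,1]]):
# the same angles as above; plottools:-rotate rotates about the x, y and z axes
# (the composition order assumed here is for illustration only)
Rot := Rz(Pi/8) . Ry(Pi/4) . Rx(Pi/3):
# the 8 vertices of a cube of side 2 centred at the origin
V := [seq(seq(seq(<i,j,k>, i in [-1,1]), j in [-1,1]), k in [-1,1])]:
proj := v -> evalf([v[1], v[2]]):          # project onto the xy-plane
gold := map(proj, V):                      # silhouette of the unrotated cube: a square
blue := map(v -> proj(Rot . v), V):        # silhouette of the rotated cube
plots:-display(
    plottools:-polygon(blue[ComputationalGeometry:-ConvexHull(blue)], color="DarkBlue", style=line),
    plottools:-polygon(gold[ComputationalGeometry:-ConvexHull(gold)], color="Gold", style=line),
    axes=none, scaling=constrained);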

There was previously no convex polyhedron known to not satisfy the Rupert property (which is not to say that all known polyhedra do satisfy it, just that none had been proven not to). But a very recent preprint posted to the arXiv, https://arxiv.org/abs/2508.18475, constructs a polyhedron the authors call the Noperthedron, and they claim the answer to the question "Does it have the Rupert property?" is a resounding "Noperthedron".

The paper gives a construction of this polyhedron, and it is easily built in Maple:

R__z := alpha -> Matrix([[cos(alpha),-sin(alpha),0], [sin(alpha), cos(alpha),0],[0,0,1]]):

C__1 := 1/259375205 * <152024884, 0, 210152163>:
C__2 := 10^(-10)*<6632738028, 6106948881, 3980949609>:
C__3 := 10^(-10)*<8193990033,5298215096,1230614493>:

Cyc__30 := [ seq(seq((-1)^l*R__z(2*Pi*k/15),l=0..1), k=0..14) ]:
# these are the 90 vertices of the Noperthedron
L := map(convert, [seq(M . C__1, M in Cyc__30), seq(M . C__2, M in Cyc__30), seq(M . C__3, M in Cyc__30)], list):

# To actually draw the polyhedron from the vertices, we need to construct the polygons for the faces.
# ConvexHull on 3-D points returns the facets of the hull as triangles, each given by three indices into L.
H := ComputationalGeometry:-ConvexHull(L):

# A triangulation of the Noperthedron, which is almost what we want: every face except the top
# and bottom is genuinely a triangle, but the top and bottom faces have been split into triangles
# and need to be merged back together.
P := [seq(L[x], x in H)]:

# split off the triangles that lie in the top face (z = C__1[3]) and the bottom face (z = -C__1[3])
R1,Q := selectremove(p-> {p[1][3], p[2][3], p[3][3]}={C__1[3]}, P):
R2,Q := selectremove(p-> {p[1][3], p[2][3], p[3][3]}={-C__1[3]}, Q):

# merge the top triangles
T := [map(p->p[1..2], {map(op, R1)[]})[]]:
ht := ComputationalGeometry:-ConvexHull(T):
Tp := [seq([p[], C__1[3]], p in T[ht])]:

# merge the bottom triangles
B := [map(p->p[1..2], {map(op, R2)[]})[]]:
hb := ComputationalGeometry:-ConvexHull(B):
Bp := [seq([p[], -C__1[3]], p in B[hb])]:

# the complete polyhedron:
NOP := [Tp, Q[], Bp]:

plots:-display(map(p->plottools:-polygon(p), NOP), axes=none, size=[1600,1600], scaling=constrained);
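
As a quick sanity check (my addition, not part of the paper's construction), we can confirm that the construction really produced 90 distinct points and that every one of them appears as a vertex of the convex hull:

nops({L[]});           # 90 distinct points
nops({map(op, H)[]});  # all 90 vertex indices appear among the hull facets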

 

Featured Post

When we think about AI, most of us picture tools like ChatGPT or Gemini. However, the reality is that AI is already built into the tools we use every day, even something as familiar as a web search. And if AI is everywhere, then so are its mistakes.

A Surprising Answer from Google

Recently, I was talking with my colleague Paulina, Senior Architect at Maplesoft, who also manages the team that creates all the Maple Learn content. We were talking about Google’s AI Overview, and I said I liked it because it usually seemed accurate. She disagreed, saying she’d found plenty of errors. Naturally, I asked for an example.

Her suggestion was simple: search “is x + y a polynomial.”

So I did. Here’s what Google’s AI Overview told me:

“No, x + y is not a polynomial”

My reaction? HUH?!

The explanation correctly defined what a polynomial is, but still failed to recognize that x and y each carry an implicit exponent of 1. The logic was there, but the conclusion was wrong.
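
For what it's worth, a quick check in Maple settles the question immediately:

type(x + y, polynom);   # true: x + y is a polynomial
degree(x + y, {x, y});  # 1: each variable has the implicit exponent 1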

Using It in the Classroom

This makes a great classroom example because it’s quick and engaging. Ask your students first whether x + y is a polynomial, then show them the AI result. The surprise sparks discussion: why does the explanation sound right but end with the wrong conclusion?

In just a few minutes, you’ve not only reviewed a basic concept but also reinforced the habit of questioning answers even when they look authoritative.

Why This Matters

As I said in a previous post, the real issue isn't the math slip; it's the habit of accepting answers without questioning them. It's our responsibility to teach students how to use these tools responsibly, especially as AI use continues to grow. Critical thinking has always mattered, and now it's essential.