This idea that AI could do valid philosophy, discussed on certain threads, seems absurd to me. So far, machines do not think. Some of them can compute and weave words together, but that's not the same thing.
It seems to me that the biggest problem of an automatic philosopher would be its lack of freedom. Its code would make it predictable and boring, not creative, and hence incapable of real philosophy. Real philosophy can only result from the free exercise of reason. And if there is no free will, there can be no real "love of wisdom", no philosophy worth the name, because there can be no love and no wisdom in a mechanical automaton.
Of course, that also applies to a lot of human pretenders to the title of "philosopher": they too can be robotic. But it's the same in every profession: some make it, some fake it.
So my question is: what would be the point of a mechanical philosopher? Is there even room for philosophy if we are not free to follow our reason where it leads us, to consider alternatives, and to exchange ideas with others?