Trained AIs like neural nets (there are other methods as well) are already beyond "simple if-then programming". The learned data is stored in a structure that produces interesting outputs, but humans can't readily trace how that stored data leads to any particular output.
An AI doesn't usually have an inner voice; it can't think something to itself and then respond to its own thought. Auto-GPT is a step in that direction, though.
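To make that concrete, here's a minimal sketch of what an inner-voice loop could look like - just the model being fed its own previous output - assuming a hypothetical generate() callable that wraps whatever LLM you have access to. This isn't how Auto-GPT itself is implemented, just the general shape of the idea:

```python
from typing import Callable

def inner_monologue(generate: Callable[[str], str],
                    seed_thought: str,
                    turns: int = 5) -> list[str]:
    """Feed the model's own output back to it so it 'responds to itself'."""
    thoughts = [seed_thought]
    for _ in range(turns):
        # The next prompt is nothing but the monologue so far.
        prompt = "Continue this train of thought:\n" + "\n".join(thoughts)
        thoughts.append(generate(prompt))
    return thoughts

# Example: inner_monologue(my_llm_call, "Why do people dream?")
```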
An AI can't listen to its environment and decide when to respond or when not to. It also can't randomly have a thought and work that thought through to a point where it says something interesting out of the blue. It's a stimulus-response machine. This one I don't know how to solve.
Large language model AIs don't currently have the ability to form new long-term memories. I have some ideas along that line, but LLMs are always likely to be limited in the new information they can retain. Whether this is important for sapience is debatable, but I think it is.
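For what it's worth, the usual workaround (not necessarily the ideas I have in mind) is to bolt an external memory store onto the model and pull relevant entries back into the prompt. A toy sketch, using crude word overlap where a real system would use embeddings and a vector index:

```python
class LongTermMemory:
    """Toy external memory: store text, recall by crude word overlap.
    A real system would use embeddings and a vector index instead."""

    def __init__(self) -> None:
        self.entries: list[str] = []

    def remember(self, text: str) -> None:
        self.entries.append(text)

    def recall(self, query: str, top_k: int = 3) -> list[str]:
        query_words = set(query.lower().split())
        scored = sorted(
            self.entries,
            key=lambda entry: len(query_words & set(entry.lower().split())),
            reverse=True,
        )
        return scored[:top_k]
```

Whatever recall() returns gets prepended to the prompt, so the model "remembers" across sessions even though its weights never change - which is also why the amount of new information it can retain stays limited.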
I think an LLM with memory, multiple voices, and the capability to filter through great amounts of input - including the output of its other voices - and decide when to respond might be in the vicinity of sapience.
Memory plus random thoughts should be designed to lead to questions, the way a toddler asks why constantly. This leads toward sapience for humans; it might for AIs as well.
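Putting those pieces together, here's a very rough sketch of the loop I'm gesturing at: multiple voices, the shared memory from the sketch above, an occasional unprompted question, and a filter that decides whether any of it is worth saying out loud. Every function name and threshold here is a hypothetical placeholder, not a worked-out design:

```python
import random
from typing import Callable

def score_interest(thought: str) -> float:
    """Placeholder 'is this worth saying out loud?' filter.
    A real one might be another model call rather than a hard-coded rule."""
    return 1.0 if "?" in thought else 0.4

def agent_loop(generate: Callable[[str], str],
               memory: LongTermMemory,   # the class from the sketch above
               voices: list[str],
               inputs: list[str],
               speak_threshold: float = 0.7) -> list[str]:
    """One pass of a multi-voice agent: each voice reacts to the combined
    input (environment plus what the other voices have already said),
    every thought goes into memory, and only thoughts that pass the
    interest filter are actually spoken."""
    context = "\n".join(inputs + memory.recall(" ".join(inputs)))
    spoken: list[str] = []
    for voice in voices:
        if random.random() < 0.2:
            # Occasional unprompted thought, biased toward questions -
            # the toddler-asking-why behaviour mentioned above.
            prompt = f"As the {voice} voice, ask a question about:\n{context}"
        else:
            prompt = f"As the {voice} voice, react to:\n{context}"
        thought = generate(prompt)
        memory.remember(thought)
        context += "\n" + thought   # later voices hear this one too
        if score_interest(thought) > speak_threshold:
            spoken.append(thought)
    return spoken
```

The score_interest() stub is exactly the part I said I don't know how to solve; it's only there to show where a real "when to speak" decision would sit.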