You simply don't have enough text for the language to be detected reliably. Check the probabilities reported by the detect_langs method:
from langdetect import detect_langs

myText = ['something like this', 'hello, I hope', 'bonjour', 'guten tag', 'hola amigos']
languages = []
for text in myText:
    # detect_langs returns candidate languages sorted by probability
    languages.append((text, detect_langs(text)))
print(languages)
That gives:
[('something like this', [en:0.7142843359964415, no:0.2857134272509894]),
('hello, I hope', [en:0.5714282536622661, it:0.42856936839505744]),
('bonjour', [hr:0.4285730214431372, sq:0.28571322755605805, fr:0.2857129560702645]),
('guten tag', [sv:0.999995044011124]),
('hola amigos', [so:0.9999965325258])]
See how the results for 'bonjour' are mixed: no single language has a clear lead over the others, and Croatian (hr) actually edges out French (fr).
Now if I add just a little more text to that example:
from langdetect import detect_langs
print(detect_langs('Bonjour, mon ami'))
That gives:
[fr:0.8571383531700392, sq:0.14285710967856416]
Which is a lot more accurate.
So to answer your question: get more data.