The atom bomb is intrinsically neutral because its effect can be either beneficial or detrimental to a society, depending only on the intent of the user. An atom bomb used to deflect an asteroid on a collision course with Earth is put to good use. The intent is to save the planet, and if the bomb had never been designed and built, humanity would have ended.
Thus the designers of the atom bomb may have had either good or bad intent; what is certain is that they did not have perfect knowledge. Yet by building the technology, they made it possible to save the Earth, because a new use was found for it.
This is universally true. Any technology is built with imperfect knowledge. Its applications can be either good or bad, and there will always and forever be applications which were not previously considered, precisely because knowledge is limited.
Dangerous
This discussion has mostly considered morality, but the same rationale applies to danger. Danger is the possibility of causing harm, injury, unpleasantness, or discomfort. Each of these is directly related to a moral argument. E.g., “injury” derives from the Latin word injuria, which literally means “a wrong.”
Thus a thing is dangerous to the extent that it can effect “a wrong” (that is, injure someone). No bomb, artificial virus, gun, sexual movie, or hangman’s noose can commit “a wrong.” A person, however, can use any of these tools to commit one.
To be dangerous simply means to be capable of causing “a wrong.” By that definition, a cupcake is dangerous if deliberately fed to a diabetic. A love letter is dangerous when sent to a mistress and later shown to a clinically depressed wife.
More or less dangerous
And so we arrive at a scale of danger. Is an atom bomb intrinsically more or less dangerous than a cupcake?
No. The atom bomb placed in orbit to defend the planet is less dangerous than the cupcake placed in the crib of a diabetic baby.
Ultimately, the danger of any thing is determined entirely by the intent of its user. Nothing at all is intrinsically dangerous unless it has a will. Only by knowing the will of the user can you know in advance whether a technology is “likely to be” dangerous.