Normal networks are the default. They are used for standard feedforward and simple-recurrent (Elman) networks. In the forward pass performed on each tick, each group, in order, updates its inputs and then immediately its outputs, so on a single tick information can propagate all the way from the first group to the last. When training a normal network, error derivatives are backpropagated on each tick, and no history of outputs and inputs is needed.
In a continuous network, on the other hand, a forward pass is "synchronous". First, all unit outputs are reset to their initOutput if the group's resetOnExample flag is true. Then all of the groups calculate their inputs. Finally, all of the groups calculate their outputs based on those inputs. Therefore, information does not propagate through the entire network on each tick; it propagates only the distance of a single link projection.
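The contrast between the two update orders can be sketched in a few lines of Python (this is illustrative pseudocode, not Lens code; the three-group chain, the group names, and the unit weights are hypothetical):

```python
def tick_normal(outputs, weight):
    """Normal network: groups update in order along the chain
    in -> hid -> out, so a value placed on the first group reaches
    the last group within this same tick."""
    new = dict(outputs)
    for src, dst in [("in", "hid"), ("hid", "out")]:
        # each group computes its input and then immediately its output
        new[dst] = weight * new[src]
    return new

def tick_continuous(outputs, weight):
    """Continuous network: all inputs are computed from the previous
    tick's outputs first, then all outputs are updated. Information
    crosses only one link projection per tick."""
    inputs = {"hid": weight * outputs["in"],
              "out": weight * outputs["hid"]}
    new = dict(outputs)
    new["hid"] = inputs["hid"]
    new["out"] = inputs["out"]
    return new

start = {"in": 1.0, "hid": 0.0, "out": 0.0}
print(tick_normal(start, 1.0))      # {'in': 1.0, 'hid': 1.0, 'out': 1.0}
print(tick_continuous(start, 1.0))  # {'in': 1.0, 'hid': 1.0, 'out': 0.0}
```

After one tick the normal ordering has already driven the output group, while the synchronous ordering has moved the signal only as far as the hidden group; a second synchronous tick would be needed to reach the output.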
Continuous networks get their name because a continuous network typically contains groups that continuously integrate their inputs or outputs over time. However, a continuous network could be given a dt of 1.0 or non-integrating groups, in which case it would not really be "continuous" per se. Such a network is typically called recurrent backprop-through-time (RBTT); in Lens it is simply treated as a special case of a continuous network.
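A minimal sketch of what "integrating over time" means here, and why dt of 1.0 removes the continuous behavior (the update rule below is a generic leaky-integration formula for illustration, not Lens's exact implementation):

```python
def integrate(prev, target, dt):
    """Move the integrated value a fraction dt of the way from its
    previous value toward its new target. With dt < 1.0 the value
    retains history; with dt == 1.0 the history vanishes and the
    update reduces to a plain (RBTT-style) assignment."""
    return prev + dt * (target - prev)

print(integrate(0.0, 1.0, 0.2))  # 0.2: gradual, "continuous" approach
print(integrate(0.0, 1.0, 1.0))  # 1.0: dt = 1.0, no integration at all
```

With dt of 1.0 each tick's value depends only on the current target, which is exactly the non-integrating special case described above.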
A network is by default non-continuous. You can make a continuous network by calling addNet with the CONTINUOUS network type.
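For example, in a Lens script (the network name here is a placeholder, and any other addNet arguments, such as the number of time intervals and ticks per interval, are omitted):

```tcl
# Create a network of the CONTINUOUS type; "myNet" is a hypothetical name.
addNet myNet CONTINUOUS
```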