They are different ideas.
Raytracing is easier to understand in theory: what you do is shoot a ray, search for its nearest intersection with the scene, and find out its value at that point.
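To make that concrete, here is a minimal sketch of the ray-shooting step for the simplest possible primitive, a sphere (the function name and the tuple-based vectors are just for illustration, not from any particular renderer):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest hit, or None.

    Ray: P(t) = origin + t * direction, with direction assumed normalized.
    Solving |P(t) - center|^2 = radius^2 gives a quadratic in t.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # a = 1 since direction is normalized
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None  # only hits in front of the ray count

# Ray from the origin along +z toward a unit sphere centered at (0, 0, 5):
# the nearest intersection is at t = 4.
t = intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

A real tracer does this against every primitive (or a spatial structure over them) and keeps the smallest t, but the per-ray logic is exactly this simple.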
Scanline works a bit differently: what you do is organize your triangles in screen order, preferably back to front, and draw them all (you can discard all the backfacing ones first, leaving you with only about half the data; essentially, there are lots of tricks you can do here so your data count goes way down). But more importantly, you sweep along the triangles one scan line at a time, reading off values as you go.
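The per-scanline sweep can be sketched like this (a toy 2D fill, assuming screen-space integer pixel centers; the function name is made up for the example): for each row, find where the triangle's edges cross that row, then fill the pixels between the crossings.

```python
import math

def rasterize_triangle(verts, width, height):
    """Scanline-fill a triangle given as three (x, y) screen-space vertices.

    For each scanline y, intersect the three edges with that row and fill
    horizontally between the two crossing points. Returns the covered pixels.
    """
    covered = set()
    ys = [v[1] for v in verts]
    y_lo = max(0, math.ceil(min(ys)))
    y_hi = min(height - 1, math.floor(max(ys)))
    edges = ((verts[0], verts[1]), (verts[1], verts[2]), (verts[2], verts[0]))
    for y in range(y_lo, y_hi + 1):
        xs = []
        for (x0, y0), (x1, y1) in edges:
            # half-open test so each crossing is counted exactly once
            if (y0 <= y < y1) or (y1 <= y < y0):
                t = (y - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        if len(xs) == 2:
            for x in range(math.ceil(min(xs)), math.floor(max(xs)) + 1):
                covered.add((x, y))
    return covered

pixels = rasterize_triangle([(0, 0), (4, 0), (0, 4)], 10, 10)
```

In a real rasterizer the same sweep also interpolates depth, normals, and texture coordinates across each row, which is why the memory access pattern is so coherent.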
Now scanline only works for primary visibility, so it doesn't go ahead and bounce anything; but you can use scanline for the primary rays and then raytrace forward from that data.
The downside of the rather easily described tracing is that it's horribly hard to do efficiently, because the data access is incoherent and computers don't automatically sort things out, so you end up with huge sorting bins that take up a lot of space. Space that you could use for something else.
Scanline, on the other hand, is straightforward: just arrange the triangles in order and paint them in. Especially if you don't have transparency, you can sort them by depth and simply draw them all. Scanline is also how your graphics card works. Its major upside is that you don't need everything in memory at all times; you can load things as they arise (which, if you render a 100,000,000-element scene, is a boon). But it can't sort out bounces, which is why you end up with stuff like reflection passes, shadow-map passes, and so on. On the upside, because those are sampled coherently, you can actually preprocess them.
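The "sort by depth, then paint" step above is the classic painter's algorithm. A minimal sketch (the representative-depth choice and function name are just illustrative assumptions; real renderers use a z-buffer instead, which avoids sorting entirely):

```python
def painters_order(triangles):
    """Sort opaque triangles far-to-near so that drawing them in this
    order lets nearer triangles overwrite farther ones.

    triangles: list of ((x, y, z), (x, y, z), (x, y, z)); larger z = farther.
    Uses the mean vertex depth as each triangle's representative depth.
    """
    def mean_depth(tri):
        return sum(v[2] for v in tri) / 3.0
    return sorted(triangles, key=mean_depth, reverse=True)  # farthest first

far_tri  = ((0, 0, 10.0), (1, 0, 10.0), (0, 1, 10.0))
near_tri = ((0, 0, 2.0),  (1, 0, 2.0),  (0, 1, 2.0))
order = painters_order([near_tri, far_tri])  # far_tri comes out first
```

Note this per-triangle sort breaks down for interpenetrating or cyclically overlapping triangles; that failure mode is exactly why hardware moved to per-pixel depth testing.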
Most software renderers are hybrids that can do BOTH, since scanline wins in pure performance and memory allocation when you don't need ray tracing (of course, as soon as you trace at all you lose the memory point immediately, but because there's less tracing to do, you can make the ray-tracing data store smaller and less performance-oriented).