One thing I liked about #EA was the emphasis on measuring the *effectiveness* of charitable work: are charities helping people in the ways they claim, how much help per dollar donated, and so on. But for folks working in the #longtermism / #xrisk / #AIRisk space, how do they measure their effectiveness? How do you know if you've made it less likely that an evil AI will turn us all into paperclips, made it more likely, or done nothing at all?