Open source software is a cornerstone of the HPC community, spanning all layers from operating systems and middleware to the applications themselves. But the intentionally open dynamic of open source communities also makes them vulnerable to malicious actors.
In this case, a group of researchers submitted a paper to the 42nd IEEE Symposium on Security and Privacy in which they describe deliberately introducing small bugs into the Linux kernel to see whether they would pass the review process. While the general threat model seems plausible, and understanding these dynamics in order to find potential mitigations is worthwhile, the methodology raises a number of ethical concerns. It has also caused controversy in the kernel developer community, leading to an at least temporary ban on all submissions from the University of Minnesota, along with a review aiming to revert many earlier changes.
The researchers were aware of the potential ethical problems of their research and discussed how they ensured that no vulnerabilities would actually be merged. At the same time, the ethical dimension of potentially wasting the time of volunteers and maintainers was overlooked.
The Department of Computer Science & Engineering has since issued a statement announcing an investigation into the research method and the process by which it was approved, and a determination of appropriate remedial action and safeguards to prevent similar issues in the future.