One big reason why I'm dead set on using Intel GPUs is my personal project, the revival of Swift for TensorFlow (S4TF). Although I'm not going to make a PR to do so myself.

It should be extremely simple to add that feature to PyTorch: just a conditional statement surrounding the Objective-C code you cited. Maybe you could provide a hidden or documented option to re-enable execution on the Intel device through the Python API, or at the very least add a prominent notice telling Intel Mac users how to compile PyTorch from source if they want to test an Intel Mac GPU.

Edit: It would also be odd if a script on macOS that profiles or otherwise uses the GPU suddenly loses acceleration when you switch between your Apple silicon Mac and your Intel Mac. I recommend putting a warning in the PyTorch docs saying "this may be slow on Intel GPUs". I think it would be best to enable support from the start, then disable it if there's a strong signal from users to do so. This would also make the PyTorch backend stand out from the TF backend, especially when someone runs a CPU-intensive process alongside their ML process, in which case the GPU is the part of the chip that's free for computation.

As someone who makes software for users, I believe it should be up to the user to decide. Most people won't have the patience or experience to compile PyTorch from source and use the compiled build products ergonomically. I think it's a bad idea to prevent the user from accessing something. I don't plan on compiling PyTorch myself, as this isn't my primary ML project, but I will inject my opinion here.

I think the plan is to keep this disabled for now and only enable it if there is a strong signal that people need it. Since we support only one device, you might want to make sure this does not shadow a more powerful AMD GPU (if you have two GPUs on that machine).
If you want to try this on your machine, you should be able to re-enable it relatively easily when building from source by simply making this if statement true: pytorch/MPSDevice.mm at 8571007017b61d793c406142bad6baeda331d00d

The reason why we disable it is that, while doing experiments, we observed that these GPUs are not very powerful; most users are better off using the CPU, which will actually be faster. And so, while most users do have these processors, most of them should not use them for ML workloads.

Sorry for the inaccurate answer in the previous post. After some more digging, you are absolutely right that this is supported in theory.
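The gating behavior discussed above (use MPS on Apple silicon, fall back to CPU on Intel Macs unless the user opts in) can be sketched in plain Python. This is a hypothetical illustration, not PyTorch's actual device-selection logic: the `select_device` helper and the `ENABLE_INTEL_MPS` environment variable are invented for this sketch, and the real check lives in the Objective-C code in MPSDevice.mm.

```python
import os

def select_device(mps_built: bool, is_apple_silicon: bool) -> str:
    """Pick a device string, mirroring the gating discussed above.

    By default, MPS is used only on Apple silicon; Intel Mac GPUs
    fall back to CPU because the CPU is usually faster for ML there.
    """
    if not mps_built:
        # PyTorch was compiled without the MPS backend.
        return "cpu"
    if is_apple_silicon:
        return "mps"
    # Intel Mac: disabled by default, but let the user opt in
    # (hypothetical flag, not a real PyTorch setting).
    if os.environ.get("ENABLE_INTEL_MPS") == "1":
        return "mps"
    return "cpu"
```

In real code, the equivalent user-visible checks are `torch.backends.mps.is_built()` (was the backend compiled in) and `torch.backends.mps.is_available()` (is a usable device present); a from-source build with the if statement flipped would make the latter return true on an Intel Mac GPU.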