12 changes: 12 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,17 @@
# Change Log for SD.Next

+## Update for 2025-05-12
+
+Curious how your system performs?
+Run the built-in benchmark and compare against over 15k unique results worldwide: [Benchmark data](https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html)!
+Results range from 0.02 it/s on a 6th-gen CPU without acceleration up to 275 it/s on a tuned GH100 system!
+
+- **Wiki**
+  - Updates for: *WSL, ZLUDA, ROCm*
+- **Compute**
+  - NNCF: added experimental support for direct INT8 MatMul
+
+
## Update for 2025-05-12

### Highlights for 2025-05-12
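The changelog entry above mentions experimental "direct INT8 MatMul" support in NNCF. As a hedged illustration of the general idea (this is not SD.Next's or NNCF's actual code), a direct INT8 matmul multiplies quantized int8 operands with int32 accumulation and rescales the result afterward, instead of dequantizing the weights back to float first:

```python
import numpy as np

def quantize_int8(x, axis):
    """Symmetric per-axis INT8 quantization: returns int8 values plus float scales."""
    scale = np.abs(x).max(axis=axis, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid division by zero for all-zero groups
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(x, w):
    """Multiply in integer arithmetic, then rescale the int32 accumulator."""
    qx, sx = quantize_int8(x, axis=1)  # one scale per activation row
    qw, sw = quantize_int8(w, axis=0)  # one scale per weight column
    acc = qx.astype(np.int32) @ qw.astype(np.int32)  # integer accumulation
    return acc * (sx * sw)  # scales broadcast to the output shape

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8)).astype(np.float32)
w = rng.standard_normal((8, 3)).astype(np.float32)
err = np.abs(int8_matmul(x, w) - x @ w).max()  # small quantization error vs. float matmul
```

Keeping the inner loop in integer arithmetic is the usual motivation for a "direct" INT8 path: it avoids materializing a dequantized float copy of the weights before every multiplication.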
6 changes: 3 additions & 3 deletions README.md
@@ -1,7 +1,7 @@
<div align="center">
<img src="https://github.com/vladmandic/sdnext/raw/master/html/logo-transparent.png" width=200 alt="SD.Next">

-**Image Diffusion implementation with advanced features**
+# SD.Next: All-in-one WebUI for AI generative image and video creation

![Last update](https://img.shields.io/github/last-commit/vladmandic/sdnext?svg=true)
![License](https://img.shields.io/github/license/vladmandic/sdnext?svg=true)
@@ -29,9 +29,9 @@ All individual features are not listed here, instead check [ChangeLog](CHANGELOG
- Multiple UIs!
▹ **Standard | Modern**
- Multiple [diffusion models](https://vladmandic.github.io/sdnext-docs/Model-Support/)!
-- Built-in Control for Text, Image, Batch and video processing!
+- Built-in Control for Text, Image, Batch and Video processing!
- Multiplatform!
-▹ **Windows | Linux | MacOS | nVidia | AMD | IntelArc/IPEX | DirectML | OpenVINO | ONNX+Olive | ZLUDA**
+▹ **Windows | Linux | MacOS | nVidia CUDA | AMD ROCm | IntelArc/IPEX | DirectML | OpenVINO | ONNX+Olive | ZLUDA**
- Platform-specific autodetection and tuning performed on install
- Optimized processing using the latest `torch` developments, with built-in support for model compile, quantize and compress
Compile backends: *Triton | StableFast | DeepCache | OneDiff | TeaCache | etc.*
10 changes: 5 additions & 5 deletions javascript/extraNetworks.js
@@ -6,11 +6,11 @@ let totalCards = -1;

const getENActiveTab = () => {
let tabName = '';
-if (gradioApp().getElementById('txt2img_prompt').checkVisibility()) return 'txt2img';
-if (gradioApp().getElementById('img2img_prompt').checkVisibility()) return 'img2img';
-if (gradioApp().getElementById('control_prompt').checkVisibility()) return 'control';
-if (gradioApp().getElementById('video_prompt').checkVisibility()) return 'video';
-if (gradioApp().getElementById('framepack_prompt_row').checkVisibility()) return 'framepack';
+if (gradioApp().getElementById('txt2img_prompt')?.checkVisibility()) return 'txt2img';
+if (gradioApp().getElementById('img2img_prompt')?.checkVisibility()) return 'img2img';
+if (gradioApp().getElementById('control_prompt')?.checkVisibility()) return 'control';
+if (gradioApp().getElementById('video_prompt')?.checkVisibility()) return 'video';
+if (gradioApp().getElementById('framepack_prompt_row')?.checkVisibility()) return 'framepack';
// legacy method
if (gradioApp().getElementById('tab_txt2img').style.display === 'block') tabName = 'txt2img';
else if (gradioApp().getElementById('tab_img2img').style.display === 'block') tabName = 'img2img';
3 changes: 2 additions & 1 deletion modules/lora/lora_apply.py
@@ -151,7 +151,8 @@ def network_add_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn.G
new_weight = dequant_weight.to(devices.device, dtype=torch.float32) + lora_weights.to(devices.device, dtype=torch.float32)
self.weight = torch.nn.Parameter(new_weight, requires_grad=False)
self.pre_ops.pop("0")
-self = nncf_compress_layer(self, num_bits, is_asym_mode, torch_dtype=devices.dtype, quant_conv=shared.opts.nncf_quantize_conv_layers)
+self._custom_forward_fn = None
+self = nncf_compress_layer(self, num_bits, is_asym_mode, torch_dtype=devices.dtype, quant_conv=shared.opts.nncf_quantize_conv_layers, group_size=shared.opts.nncf_compress_weights_group_size, use_int8_matmul=shared.opts.nncf_decompress_int8_matmul)
self = self.to(device)
del dequant_weight
except Exception as e:
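The hunk above folds LoRA deltas into a dequantized base weight before recompressing the layer. In general form (a sketch with hypothetical names, not SD.Next's code path), merging a low-rank update means adding the scaled outer product of the two LoRA factors to the base weight, W' = W + (alpha / r) * B @ A:

```python
import numpy as np

def merge_lora(base_w, lora_a, lora_b, alpha):
    """Fold a low-rank LoRA update into the base weight: W' = W + (alpha / r) * B @ A."""
    rank = lora_a.shape[0]  # rank r is the inner dimension of the factorization
    return base_w + (alpha / rank) * (lora_b @ lora_a)

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16)).astype(np.float32)  # base layer weight
A = rng.standard_normal((4, 16)).astype(np.float32)   # rank-4 down projection
B = rng.standard_normal((16, 4)).astype(np.float32)   # rank-4 up projection
W_merged = merge_lora(W, A, B, alpha=4.0)
```

Merging once up front, as the diff does before calling `nncf_compress_layer` again, pays the cost of one dense addition per layer but leaves inference with a single weight tensor instead of an extra low-rank multiply on every forward pass.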
6 changes: 5 additions & 1 deletion modules/model_quant.py
@@ -118,7 +118,11 @@ def create_nncf_config(kwargs = None, allow_nncf: bool = True, module: str = 'Mo
diffusers.quantizers.auto.AUTO_QUANTIZATION_CONFIG_MAPPING["nncf"] = NNCFConfig
transformers.quantizers.auto.AUTO_QUANTIZATION_CONFIG_MAPPING["nncf"] = NNCFConfig

-nncf_config = NNCFConfig(weights_dtype=shared.opts.nncf_compress_weights_mode.lower())
+nncf_config = NNCFConfig(
+    weights_dtype=shared.opts.nncf_compress_weights_mode.lower(),
+    group_size=shared.opts.nncf_compress_weights_group_size,
+    use_int8_matmul=shared.opts.nncf_decompress_int8_matmul,
+)
log.debug(f'Quantization: module="{module}" type=nncf dtype={shared.opts.nncf_compress_weights_mode}')
if kwargs is None:
return nncf_config
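The new `group_size` option passed to the NNCF config controls how many consecutive weights share a single quantization scale: smaller groups track local value ranges more closely, at the cost of storing more scales. A minimal sketch of group-wise INT8 compression (illustrative only; NNCF's actual implementation differs):

```python
import numpy as np

def groupwise_int8(w, group_size):
    """Symmetric INT8 quantization with one scale per group of consecutive weights."""
    flat = w.reshape(-1, group_size)  # split the tensor into fixed-size groups
    scale = np.abs(flat).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # guard against all-zero groups
    q = np.clip(np.round(flat / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale, shape):
    """Recover an approximate float tensor from int8 values and per-group scales."""
    return (q.astype(np.float32) * scale).reshape(shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 64)).astype(np.float32)
q, s = groupwise_int8(w, group_size=32)     # 512 weights -> 16 groups of 32
w_hat = dequantize(q, s, w.shape)
err = np.abs(w - w_hat).max()               # worst-case reconstruction error
```

With `group_size=32` each scale covers 32 weights, so an outlier only inflates the scale of its own group rather than of an entire row, which is why smaller groups usually reduce reconstruction error.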