Memory Optimization Flags
Uddin-Lang provides production-ready memory layout optimizations to improve performance in memory-constrained environments, long-running applications, and production workloads.
Overview
The memory optimization feature enables several low-level optimizations designed to reduce memory footprint and improve cache performance. These optimizations are particularly beneficial for enterprise applications, web servers, and data processing workloads.
Available Flags
Uddin-Lang offers two memory optimization modes:
- --memory-optimize-stable: Production-ready optimizations (recommended)
- --memory-optimize-experimental: All optimizations including experimental features (testing only)
Stable Mode (--memory-optimize-stable)
Recommended for production use. This mode enables proven, thread-safe optimizations.
Usage
./uddinlang --memory-optimize-stable script.din
Enabled Features
When using --memory-optimize-stable, the following optimizations are enabled:
✅ Tagged Values
- Purpose: Reduces memory overhead by using compact value representation
- Benefit: 30-50% reduction in memory usage for small values (integers, booleans)
- How it works: Uses a tagged union representation instead of Go's interface{} (16 bytes)
- Best for: Applications with many integer/boolean values, memory-constrained environments
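The exact representation is internal to the interpreter, but a minimal Go sketch of the idea (all names hypothetical) looks like this: small values are stored inline next to a type tag instead of being boxed behind an interface{} header.
// Hypothetical sketch of a tagged value; type and field names are
// illustrative, not the interpreter's actual definitions.
type valueTag uint8

const (
    tagInt valueTag = iota
    tagBool
    tagFloat
    tagHeap // strings, arrays, maps still live on the heap
)

// taggedValue keeps small values inline next to a one-byte tag instead of
// boxing each value in an interface{} header (two words on 64-bit platforms).
type taggedValue struct {
    tag  valueTag
    bits uint64 // integer value, 0 or 1 for bool, or raw float bits
    ref  any    // only set when tag == tagHeap
}

func fromInt(i int64) taggedValue {
    return taggedValue{tag: tagInt, bits: uint64(i)}
}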
✅ Compact Environment
- Purpose: Thread-safe, memory-efficient variable storage
- Benefit: Better memory locality, reduced pointer indirection
- How it works: Uses optimized data structures for variable scopes with mutex protection
- Best for: Applications with many variables, concurrent execution, production environments
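As a rough mental model only (the real structure may differ), a compact environment can be pictured as a slice-backed scope guarded by a mutex, which keeps variables contiguous in memory and avoids a map allocation per scope. This sketch reuses the hypothetical taggedValue type from the previous section.
import "sync"

// Illustrative sketch only; this is not the interpreter's actual scope type.
type compactScope struct {
    mu     sync.RWMutex
    names  []string      // variable names, index-aligned with values
    values []taggedValue // contiguous storage improves locality
    parent *compactScope // enclosing scope, searched when a name is not local
}

func (s *compactScope) lookup(name string) (taggedValue, bool) {
    s.mu.RLock()
    defer s.mu.RUnlock()
    for i, n := range s.names {
        if n == name {
            return s.values[i], true
        }
    }
    if s.parent != nil {
        return s.parent.lookup(name)
    }
    return taggedValue{}, false
}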
✅ Cache-Friendly Structures
- Purpose: Optimizes large arrays and maps for CPU cache performance
- Benefit: Better cache hit rates, faster sequential access patterns
- How it works: Uses cache-aligned data structures for arrays >100 elements and maps >50 elements
- Best for: Applications with large data structures, bulk operations
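The thresholds above imply a simple selection rule. A hypothetical sketch of such a switch follows; the constants mirror the documented cutoffs, while the chunked layout and chunk size are assumptions, not the real implementation.
// Hypothetical thresholds mirroring the documented cutoffs.
const (
    arrayThreshold = 100 // arrays larger than this get the cache-aligned layout
    mapThreshold   = 50  // same idea for maps
    chunkElems     = 64  // elements per chunk, chosen to fill whole cache lines
)

// newArrayStorage picks a representation based on the expected length.
func newArrayStorage(n int) any {
    if n <= arrayThreshold {
        // Small arrays: a plain pre-sized slice is already compact enough.
        return make([]taggedValue, 0, n)
    }
    // Large arrays: fixed-size chunks keep sequential scans within a few
    // cache lines per chunk and avoid one huge reallocation on growth.
    numChunks := (n + chunkElems - 1) / chunkElems
    return make([][]taggedValue, 0, numChunks)
}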
✅ Memory Leak Detection
- Purpose: Automatically detects and tracks memory leaks
- Benefit: Identifies memory issues in long-running applications
- How it works: Tracks all allocations and detects unreleased memory
- Best for: Long-running applications, production monitoring
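Conceptually this behaves like an allocation registry. The following is a simplified, hypothetical sketch, not the interpreter's actual detector.
import "sync"

// Illustrative leak tracker; names and behavior are assumptions.
type leakTracker struct {
    mu   sync.Mutex
    live map[uintptr]string // allocation id -> where it was allocated
}

func (t *leakTracker) track(id uintptr, site string) {
    t.mu.Lock()
    defer t.mu.Unlock()
    t.live[id] = site
}

func (t *leakTracker) release(id uintptr) {
    t.mu.Lock()
    defer t.mu.Unlock()
    delete(t.live, id)
}

// report returns the allocation sites that were never released, e.g. for a
// periodic check in a long-running service or a final check at shutdown.
func (t *leakTracker) report() []string {
    t.mu.Lock()
    defer t.mu.Unlock()
    sites := make([]string, 0, len(t.live))
    for _, site := range t.live {
        sites = append(sites, site)
    }
    return sites
}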
❌ Variable Lookup Cache (Disabled)
- Status: Not enabled in stable mode (still experimental)
- Reason: May have thread safety issues in some scenarios
Benefits
- 15-30% reduction in memory consumption
- Improved CPU cache performance (better cache hit rates)
- Thread-safe operations for concurrent workloads
- Memory leak detection for long-running applications
- Production-ready and thoroughly tested
Best Practices for Stable Mode
1. Use for Production Applications
# Web server or long-running service
./uddinlang --memory-optimize-stable server.din
2. Monitor Memory Usage
- Use alongside profiling to track memory improvements
- Monitor for any unusual memory growth patterns
3. Enable for Memory-Constrained Environments
# Container with memory limits
./uddinlang --memory-optimize-stable data_processor.din
4. Use for Variable-Heavy Code
- Applications with many variables benefit significantly
- Code with deep nested scopes sees improved performance
Experimental Mode (--memory-optimize-experimental)
⚠️ Warning: For testing and development only. Not recommended for production.
This mode enables all optimizations including experimental features that may have stability issues.
Usage
./uddinlang --memory-optimize-experimental script.din
Enabled Features
Experimental mode includes everything from stable mode, plus:
✅ Variable Lookup Cache (Experimental)
- Purpose: Caches variable lookups to reduce scope chain traversal
- Benefit: Faster variable access in loops and recursive functions
- Warning: May not be thread-safe in all concurrent scenarios
- Best for: Testing, development, single-threaded workloads
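A rough sketch of the idea (hypothetical; the real cache is internal) also makes the thread-safety caveat concrete: a plain map shared across goroutines would race without additional synchronization.
// Illustrative lookup cache; not the interpreter's implementation. Note the
// plain map: without extra locking, concurrent use from multiple goroutines
// is a data race, which is why this feature is still experimental.
type lookupCache struct {
    entries map[string]*taggedValue // name -> slot found by an earlier scope walk
}

func (c *lookupCache) get(name string) (*taggedValue, bool) {
    v, ok := c.entries[name]
    return v, ok
}

func (c *lookupCache) put(name string, v *taggedValue) {
    c.entries[name] = v
}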
⚠️ Aggressive Memory Leak Detection
- Status: More aggressive leak detection (30-second intervals)
- Warning: May impact performance due to frequent checks
When to Use Experimental Mode
✅ Use for:
- Development and testing
- Benchmarking optimization effects
- Experimenting with new features
- Single-threaded applications
❌ Avoid for:
- Production environments
- Multi-threaded applications
- Critical systems requiring stability
- Long-running production services
Comparison: Normal vs Optimized
| Aspect | Normal Mode | Stable Mode | Experimental Mode |
|---|---|---|---|
| Value Storage | interface{} (16 bytes) | TaggedValue (compact) | TaggedValue (compact) |
| Variable Lookup | Direct scope traversal | Direct scope traversal | Cached lookups |
| Memory Layout | Standard Go layout | Cache-aligned structures | Cache-aligned structures |
| Memory Tracking | None | Full leak detection | Aggressive leak detection |
| Thread Safety | Basic | Fully thread-safe | Partial (cache may not be thread-safe) |
| Performance | Good | Better (memory-constrained) | Best (single-threaded) |
| Stability | High | High | Medium (experimental) |
| Production Ready | ✅ Yes | ✅ Yes | ❌ No |
Use Cases
✅ Recommended Use Cases
1. Long-Running Applications
# Web servers, background workers, daemon processes
./uddinlang --memory-optimize-stable web_server.din
Why? Memory leak detection and thread-safe operations are essential for long-running processes.
2. Memory-Constrained Environments
# Embedded systems, containers with memory limits
./uddinlang --memory-optimize-stable data_processor.din
Why? Tagged values and compact structures significantly reduce memory footprint.
3. Production Workloads
# High-availability systems, enterprise applications
./uddinlang --memory-optimize-stable business_logic.din
Why? Production-ready optimizations with full thread safety and leak detection.
4. Variable-Heavy Code
# Code with many variables, deep nested scopes
./uddinlang --memory-optimize-stable rule_engine.din
Why? Compact environment and cache-friendly structures optimize variable access.
❌ Not Recommended Use Cases
1. Simple One-Off Scripts
# Don't use for simple scripts
./uddinlang simple_script.din # Just use default mode
Why? Overhead of optimization isn't worth it for quick scripts.
2. Performance-Critical CPU-Bound Tasks
# For real-time systems where every CPU cycle counts
./uddinlang real_time_processor.din # Use default mode
Why? Tracking overhead may slightly impact CPU-bound operations.
3. Testing/Development (Stable Mode)
# For testing, use experimental mode if needed
./uddinlang --memory-optimize-experimental test.din
Why? Experimental mode includes additional features for testing.
Programmatic Usage
Go API
import "github.com/bonkzero404/uddin-lang/interpreter"
// Stable mode
config := interpreter.StableConfig()
stats, err := interpreter.Execute(program, config)
// Experimental mode
config := interpreter.ExperimentalConfig()
stats, err := interpreter.Execute(program, config)
// Custom configuration
config := &interpreter.Config{
MemoryLayout: interpreter.StableMemoryLayoutConfig(),
}
stats, err := interpreter.Execute(program, config)
Using Engine API
For stable mode, use NewWithConfig() with StableConfig():
import "github.com/bonkzero404/uddin-lang"
import "github.com/bonkzero404/uddin-lang/interpreter"
// Stable mode
engine := uddin.NewWithConfig(interpreter.StableConfig())
result, err := engine.ExecuteString(code)
// Experimental mode (testing only)
engine := uddin.NewWithConfig(interpreter.ExperimentalConfig())
result, err := engine.ExecuteString(code)
Note: The EnableMemoryOptimization() method exists but does not enable these optimizations (it only applies the default configuration). Use NewWithConfig() with StableConfig() or ExperimentalConfig() instead.
Performance Comparison
Memory Usage
Typical memory reductions when using --memory-optimize-stable:
- Small values (int, bool): 50-70% reduction
- Variable storage: 20-30% reduction
- Large arrays: 15-25% reduction (via cache-friendly structures)
- Overall: 15-30% reduction in total memory usage
CPU Performance
- Sequential access: 10-20% improvement (cache-friendly structures)
- Variable lookup: Similar or slightly better (compact environment)
- Memory allocation: 20-30% reduction in GC pressure
Real-World Example
# Without optimization
$ ./uddinlang --profile data_processor.din
Memory used: 125 MB
Execution time: 2.3s
GC pauses: 12 (total: 340ms)
# With stable optimization
$ ./uddinlang --memory-optimize-stable --profile data_processor.din
Memory used: 92 MB (26% reduction)
Execution time: 2.1s (8% faster)
GC pauses: 8 (total: 210ms) (38% reduction)
Best Practices
1. Choose the Right Mode
- Production: Always use --memory-optimize-stable
- Testing: Use --memory-optimize-experimental if needed
- Development: Default mode is usually sufficient
2. Profile Before and After
Always compare performance with profiling:
# Baseline
./uddinlang --profile script.din > baseline.txt
# With optimization
./uddinlang --memory-optimize-stable --profile script.din > optimized.txt
# Compare results
diff baseline.txt optimized.txt
3. Monitor Memory Leaks
In stable mode, memory leak detection is enabled automatically:
# Check for memory leaks in long-running applications
./uddinlang --memory-optimize-stable long_running_service.din
# Leaks will be automatically detected and logged
4. Use for Large Datasets
Optimizations are most effective with:
- Arrays with >100 elements
- Maps with >50 key-value pairs
- Applications processing large amounts of data
5. Thread Safety
- Stable mode: Fully thread-safe, safe for concurrent operations
- Experimental mode: May have thread safety issues with variable lookup cache
- Always use stable mode for concurrent/multi-threaded applications
6. Memory-Constrained Environments
For containers or embedded systems:
# Set memory limits and use stable mode
docker run --memory=512m myapp \
./uddinlang --memory-optimize-stable app.din
Troubleshooting
Issue: No Performance Improvement
Problem: Not seeing expected memory reduction.
Solutions:
- Check if your workload benefits from optimization (large arrays/maps, many variables)
- Profile both versions to see actual improvements
- Ensure you're using stable mode (not experimental, unless testing)
Issue: Unexpected Behavior
Problem: Code behaves differently with optimization enabled.
Solutions:
- Verify you're using --memory-optimize-stable (not experimental)
- Check for any thread-safety issues in your code
- Test thoroughly before deploying to production
Issue: Memory Leak Warnings
Problem: Getting memory leak detection warnings.
Solutions:
- Review the reported leaks in logs
- Check for circular references or unclosed resources
- Use stable mode's automatic cleanup (enabled by default)
Examples
Example 1: Web Server with Memory Optimization
#!/bin/bash
# Start web server with memory optimization
./uddinlang --memory-optimize-stable web_server.din
Example 2: Data Processing Script
#!/bin/bash
# Process large dataset with memory optimization
./uddinlang --memory-optimize-stable --profile data_processor.din
Example 3: Container Deployment
FROM alpine:latest
COPY uddinlang /usr/local/bin/
COPY app.din /app/
CMD ["/usr/local/bin/uddinlang", "--memory-optimize-stable", "/app/app.din"]
Differences from Memory Pools
It's important to understand that Memory Optimization Flags are different from Memory Pools:
| Feature | Memory Pools | Memory Optimization Flags |
|---|---|---|
| Status | Always active | Requires flag |
| Purpose | Reuse allocations | Optimize memory layout |
| Focus | Allocation reuse | Memory footprint reduction |
| Overhead | Minimal | Slight (tracking, conversion) |
| Use Case | All scenarios | Specific scenarios |
Memory Pools are always active and handle allocation reuse automatically. Memory Optimization Flags are optional and provide additional memory layout optimizations for specific use cases.
Summary
Quick Decision Guide
Use --memory-optimize-stable if:
- ✅ Running production applications
- ✅ Long-running services (web servers, daemons)
- ✅ Memory-constrained environments
- ✅ Applications with large data structures
- ✅ Need memory leak detection
Use default mode if:
- ✅ Simple one-off scripts
- ✅ Quick prototypes
- ✅ Learning/testing code
- ✅ CPU-bound real-time systems
Use --memory-optimize-experimental only for:
- ⚠️ Testing and development
- ⚠️ Benchmarking
- ⚠️ Single-threaded workloads
- ❌ Never in production
Remember: Always use --memory-optimize-stable for production applications. Experimental mode is for testing only!