THE MINTING STANDARD
a set of standards for minting physical Bitcoin.


PHYSICAL BITCOIN RECEIPT STANDARDIZATION 

A physical bitcoin receipt is digital proof that bitcoin has been transformed into a physical object. It is neither a digital artifact nor an NFT that someone can own. Rather, it testifies that the noted amount of bitcoin sent to a burn address wasn't lost, but cryptographically transferred to a one-of-a-kind object in the real world. 

All physical bitcoin receipts reference three standards for authentication:

1. The Natural Standard: The natural physical pattern used to derive the burn address.
2. The Money Standard: A marking on the Natural Standard, noting the amount of bitcoin accounted to it.  
3. The Minting Standard: An immutable record of the open-source system used to derive the Bitcoin burn address from the Natural Standard.

Physical bitcoin authenticity requirements:  

1. The Bitcoin network must show, at minimum, the noted amount of bitcoin held at its physical burn address.
2. Its Natural Standard must be naturally unique.
3. Its Minting Standard must be able to reproduce the same burn address from the Natural Standard.

Receipts are inscribed with the physical bitcoin's burn transaction, which means that the Money and Minting Standards recorded in the inscription can never change. If future mints want to use the same Minting Standard, they can reference this inscription in their receipts. 

Natural Standards don't have digital immutability because they are preserved physically, so receipts don't inscribe the image on-chain. Instead, they reference a digital copy of it off-chain. It is the physical bitcoin owner's responsibility to keep an exact copy of the original Natural Standard in case the public version disappears. The backup saves them the hassle of recreating the exact same image from the physical bitcoin.    


MINTING STANDARDIZATION

This Minting Standard uses image recognition software and a cryptographic hash function to transform an image into a Bitcoin burn address. 

How it works:
  
1. Image Preprocessing: The script loads the image in grayscale and applies histogram equalization. This step normalizes the lighting across the image, making the feature detection process less sensitive to lighting variations.

2. Feature Detection: The script uses the Scale-Invariant Feature Transform (SIFT) to detect keypoints and compute descriptors in the image. SIFT identifies points of interest in the image (like edges, corners, etc.) that are consistent across different scales and rotations, generating a "feature vector" for each keypoint. These feature vectors capture the essence of the image patterns around each keypoint and are designed to be invariant to changes in scale, rotation, and illumination.

3. Feature Vectors: Feature vectors are numerical representations of important attributes of patches of the image around each keypoint. They encode information like the orientation and scale of gradients which helps in recognizing patterns or specific features within the image.

4. Hash Generation: The feature vectors are flattened into a single long vector representing the entire image. This vector is converted into a byte array and hashed with SHA-256, generating a unique fingerprint (hash) of the image's features. The script truncates the hash to 19 characters and, because '0' and 'I' are not part of Bitcoin's Base58 alphabet, re-rolls the feature vector and re-hashes until the truncated hash contains neither character. (A condensed sketch of Steps 1-4 follows this list.)

5. Burn Address Creation: The script concatenates "1BtcMint", the 19-character hash, and "XXXXXXX" to form a template. The "burn" function Base58-decodes this template (Base58 is the encoding scheme used for legacy Bitcoin addresses), drops the last four bytes, recomputes the four-byte checksum over what remains, appends it, and Base58-encodes the result, yielding a well-formed Bitcoin address.

6. Address Validation: Finally, the script validates the generated Bitcoin address to ensure it conforms to the expected format for Bitcoin addresses.
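
As a quick illustration, here is a minimal sketch of Steps 1-4 using the same SIFT parameters as the full script further below ('natural_standard.png' is a hypothetical file path, and the re-rolling used to avoid non-Base58 characters is omitted for brevity):

import cv2
import hashlib

# Steps 1-2: load in grayscale, equalize lighting, detect SIFT keypoints and descriptors
img = cv2.equalizeHist(cv2.imread('natural_standard.png', cv2.IMREAD_GRAYSCALE))
sift = cv2.SIFT_create(contrastThreshold=0.04, edgeThreshold=10, nfeatures=200)
keypoints, descriptors = sift.detectAndCompute(img, None)  # each descriptor is a 128-dim feature vector

# Steps 3-4: flatten all descriptors into one vector and fingerprint it with SHA-256
feature_vector = descriptors.flatten()
hash_hex = hashlib.sha256(feature_vector.tobytes()).hexdigest()[:19]  # truncated to 19 characters
print(len(keypoints), "keypoints ->", hash_hex)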

The output is a Bitcoin burn address that's deterministically derived from the Natural Standard's feature vectors, so the same Natural Standard will always generate the same address. And because the address is constructed from a template rather than from the hash of a public key, no corresponding private key is known, and bitcoin sent to it is effectively burned.
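
The no-private-key property can be inspected directly. A minimal sketch, assuming burn_address holds the output of the BTCBurnAddressGeneratorScript further below:

import base58
import binascii

# b58decode_check verifies the 4-byte checksum and returns version byte + payload
payload = base58.b58decode_check(burn_address)
print("Version byte:", payload[0])  # 0x00, the prefix of a legacy '1...' address
print("Payload:", binascii.hexlify(payload[1:]).decode('ascii'))
# The payload is simply the Base58 decoding of the '1BtcMint...XXXXXXX' template,
# not HASH160 of any public key, so no private key is known for this address.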

Burn address authentication steps:

Step 1: Import the following libraries to your Python environment: cv2, numpy, hashlib, 
matplotlib.pyplot, binascii, and base58.
Step 2: Upload The Natural Standard image file to a Jupyter Notebook environment like Google Colab.
Step 3: Copy the BTCBurnAddressGeneratorScript.
Step 4: Paste the BTCBurnAddressGeneratorScript and edit it so that 
'THE NATURAL STANDARD IMAGE PATH HERE' is replaced with your uploaded file's path.
Step 5: Run the script and verify authenticity by comparing the BTC Burn Address output 
with the address in the inscription. An exact match confirms that the address was derived 
from the Natural Standard. (A condensed one-cell version of Steps 2-5 is sketched below.)
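
Once the generator script's functions are defined in the notebook, Steps 2 through 5 collapse into a few lines. A minimal sketch, assuming the Natural Standard was uploaded as 'natural_standard.png' (a hypothetical file name), with the inscription address left as a placeholder:

# Assumes compute_sift_features, generate_feature_vector_hash and burn
# from the BTCBurnAddressGeneratorScript below are already defined.
feature_vector, _ = compute_sift_features('natural_standard.png')
derived_address = burn(f"1BtcMint{generate_feature_vector_hash(feature_vector)}XXXXXXX")
inscription_address = 'PASTE THE INSCRIPTION BURN ADDRESS HERE'  # from the receipt inscription
print("Derived address:", derived_address)
print("Matches inscription:", derived_address == inscription_address)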

Physical authentication steps:

Step 1: Take a picture of the physically minted satoshi and crop a selection that matches 
the exact dimensions of The Natural Standard.
Step 2: Desaturate the image and increase the contrast and levels to match the look of 
The Natural Standard. (One way to do this in OpenCV is sketched after these steps.)
Step 3: Import the following libraries to a Jupyter Notebook like Google Colab: cv2, numpy.
Step 4: Upload The Natural Standard and your new image to the Jupyter Notebook.
Step 5: Copy the ImageVerificationScript.
Step 6: Paste the ImageVerificationScript into the Jupyter Notebook and edit the script 
to ensure the two file paths are correct.
Step 7: Run the ImageVerificationScript. The script will tell you whether the two images 
match the same Natural Standard by counting the RANSAC inliers between them.
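
Step 2 can be done in any photo editor. As one possible shortcut, the small OpenCV sketch below performs the desaturation and contrast adjustment programmatically ('physical_bitcoin_crop.jpg' is a hypothetical file name for the crop from Step 1):

import cv2

photo = cv2.imread('physical_bitcoin_crop.jpg', cv2.IMREAD_GRAYSCALE)  # desaturate
photo = cv2.equalizeHist(photo)                                        # normalize contrast and levels
cv2.imwrite('physical_bitcoin_crop_prepared.jpg', photo)               # use this path in the verification script
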
"""

#BTCBurnAddressGeneratorScript

!pip install opencv-python==4.8.0.76 opencv-contrib-python==4.8.0.76 base58==2.1.1

import cv2
import numpy as np
import hashlib
from matplotlib import pyplot as plt
import binascii
import base58

def compute_sift_features(image_path):
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise ValueError("Image not loaded properly")
    
    # Histogram equalization for lighting normalization
    image = cv2.equalizeHist(image)
    
    # Adjust SIFT parameters for better robustness
    sift = cv2.SIFT_create(contrastThreshold=0.04, edgeThreshold=10, nfeatures=200)
    
    keypoints, descriptors = sift.detectAndCompute(image, None)
    if descriptors is None:
        return np.array([]), []
    
    feature_vector = descriptors.flatten()
    return feature_vector, cv2.drawKeypoints(image, keypoints, None)

def generate_feature_vector_hash(feature_vector):
    # Roll the feature vector and re-hash until the truncated hex digest contains no
    # '0' or 'I', since neither character is part of Bitcoin's Base58 alphabet.
    adjustment_counter = 0
    while True:
        adjusted_feature_vector = np.roll(feature_vector, adjustment_counter)
        feature_vector_bytes = adjusted_feature_vector.tobytes()
        hash_object = hashlib.sha256(feature_vector_bytes)
        hash_hex = hash_object.hexdigest()[:19]
        if '0' not in hash_hex and 'I' not in hash_hex:
            return hash_hex
        adjustment_counter += 1

def b58ec(s):
    # Base58-encode a hex string
    unencoded = bytearray.fromhex(s)
    encoded = base58.b58encode(unencoded)
    return encoded.decode('ascii')

def b58dc(encoded, trim=0):
    # Base58-decode and drop the last `trim` bytes (e.g. a placeholder checksum)
    unencoded = base58.b58decode(encoded)
    return unencoded[:-trim] if trim else unencoded

def burn(s):
    # Turn the address template into a valid Base58Check address: decode it, drop the
    # last 4 placeholder bytes, recompute the real 4-byte checksum, and re-encode.
    decoded = b58dc(s, trim=4)
    decoded_hex = binascii.hexlify(decoded).decode('ascii')
    check = hh256(decoded)[:8].decode('ascii')  # first 4 bytes (8 hex chars) of the double SHA-256
    coded = decoded_hex + check
    return b58ec(coded)

def hh256(s):
    # Double SHA-256 (Bitcoin's checksum hash), returned as a hex byte string
    s = hashlib.sha256(s).digest()
    return binascii.hexlify(hashlib.sha256(s).digest())

def validate_btc_address(address):
    # Accept only addresses that pass Base58Check and carry version byte 0x00 (legacy '1...' prefix)
    try:
        decoded = base58.b58decode_check(address)
        return decoded[0] == 0x00
    except Exception:
        return False

# Example usage
image_path = 'THE NATURAL STANDARD IMAGE PATH HERE'
feature_vector, image_with_keypoints = compute_sift_features(image_path)
feature_vector_hash = generate_feature_vector_hash(feature_vector)
print("Feature vector hash:", feature_vector_hash)
plt.imshow(image_with_keypoints, cmap='gray')
plt.show()

template = f"1BtcMint{feature_vector_hash}XXXXXXX"
burn_address = burn(template)
print("BTC Burn Address:", burn_address)

address_valid = validate_btc_address(burn_address)
print("Address formatting valid:", address_valid)



#ImageVerificationScript

import cv2
import numpy as np

# Load the images
image1 = cv2.imread('THE NATURAL STANDARD IMAGE PATH HERE', cv2.IMREAD_GRAYSCALE)
image2 = cv2.imread('YOUR IMAGE PATH HERE', cv2.IMREAD_GRAYSCALE)
if image1 is None or image2 is None:
    raise ValueError("One or both images failed to load; check the file paths")

# Initialize SIFT detector
sift = cv2.SIFT_create()

# Find the keypoints and descriptors with SIFT
keypoints1, descriptors1 = sift.detectAndCompute(image1, None)
keypoints2, descriptors2 = sift.detectAndCompute(image2, None)

# Create FLANN matcher (algorithm=1 selects the KD-tree index)
index_params = dict(algorithm=1, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

# Match descriptors
matches = flann.knnMatch(descriptors1, descriptors2, k=2)

# Apply Lowe's ratio test
good_matches = [m for m, n in matches if m.distance < 0.7 * n.distance]

# Estimate homography
if len(good_matches) > 4:
    src_pts = np.float32([keypoints1[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    dst_pts = np.float32([keypoints2[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)

    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    matchesMask = mask.ravel().tolist()

    # Count the number of inliers
    inliers = np.sum(matchesMask)
    print(f"Number of inliers: {inliers}")

    # Determine if it's a good match
    if inliers > 150:
        print("The images are a good match.")
    else:
        print("The images are not a match.")
else:
    print("Not enough matches are found - {}/{}".format(len(good_matches), 4))