<table border="1" cellspacing="0" cellpadding="8">
    <tr>
        <th>Issue</th>
        <td>
<a href="https://github.com/llvm/llvm-project/issues/138739">138739</a>
        </td>
    </tr>

    <tr>
        <th>Summary</th>
        <td>
            Tf.MaxPool3D not supported
        </td>
    </tr>

    <tr>
      <th>Labels</th>
      <td>
      </td>
    </tr>

    <tr>
      <th>Assignees</th>
      <td>
      </td>
    </tr>

    <tr>
      <th>Reporter</th>
      <td>
          GiuseppeSorrentino99
      </td>
    </tr>
</table>

<pre>
    Hello, I am trying to convert a TF network to TOSA, but it seems one of the layers is not supported: 

```
output/tosa.mlir:46:25: error: operation being parsed with an unregistered dialect. If this is intended, please use -allow-unregistered-dialect with the MLIR tool used
    %43 = "tf.MaxPool3D"(%42) {data_format = "NDHWC", device = "", ksize = [1, 2, 2, 2, 1], padding = "VALID", strides = [1, 2, 2, 2, 1]} : (tensor<1x128x128x128x16xf32>) -> tensor<1x64x64x64x16xf32>
```

Since Conv3D is supported, is there a workaround for this problem?
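For what it's worth, a 2x2x2 max pool with stride 2 and VALID padding is separable, so it can be rewritten as a batched 2D pool over H and W followed by a max over consecutive depth pairs. A quick sanity check of this identity in plain NumPy (function names are mine, just for illustration):

```python
import numpy as np

def max_pool3d_2x2x2(vol):
    # Reference 2x2x2 max pool, stride 2, VALID padding: [B,D,H,W,C] -> [B,D/2,H/2,W/2,C]
    B, D, H, W, C = vol.shape
    v = vol.reshape(B, D // 2, 2, H // 2, 2, W // 2, 2, C)
    return v.max(axis=(2, 4, 6))

def max_pool3d_via_2d(vol):
    # Same result via a batched 2D pool over H,W, then a max over depth pairs
    B, D, H, W, C = vol.shape
    flat = vol.reshape(B * D, H // 2, 2, W // 2, 2, C).max(axis=(2, 4))  # 2D pool
    return flat.reshape(B, D // 2, 2, H // 2, W // 2, C).max(axis=2)     # depth pairs

x = np.random.rand(1, 8, 8, 8, 16).astype(np.float32)
assert np.array_equal(max_pool3d_2x2x2(x), max_pool3d_via_2d(x))
```

This suggests the op could be lowered through 2D pooling plus reshapes, which is what I would try as a workaround.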

I also attach the code for the network and the set of commands used to reproduce the error: 

**NN**

```
import tensorflow as tf
from tensorflow.keras import layers, models
import tensorflow_addons as tfa

class SpatialTransformer(layers.Layer):
    """3D Spatial Transformer using batched 2D warps and static shape enforcement."""
    def call(self, inputs):
        vol, flow = inputs  # vol: [B,D,H,W,C], flow: [B,D,H,W,3]

        # 1. Enforce static (non-zero) shapes to satisfy TOSA requirements
        #    (TOSA dialect expects all dims ≥ 1 and statically known) 
        vol  = tf.ensure_shape(vol,  [None, vol.shape[1], vol.shape[2], vol.shape[3], vol.shape[4]])
        flow = tf.ensure_shape(flow, [None, flow.shape[1], flow.shape[2], flow.shape[3], 3])          

        # 2. Flatten depth dimension into batch: [B,D,H,W,C] → [B*D,H,W,C]
        shape = tf.shape(vol)
        B, D, H, W, C = shape[0], shape[1], shape[2], shape[3], vol.shape[4]
        vol_flat  = tf.reshape(vol,  tf.stack([B * D, H, W, C]))                                     
        flow_flat = tf.reshape(flow, tf.stack([B * D, H, W, 3]))  # flow has 3 channels

        # 3. Perform a single batched 2D warp via dense_image_warp,
        #    avoiding tf.map_fn loops entirely 
        moved_flat = tfa.image.dense_image_warp(vol_flat, flow_flat[..., :2])

        # 4. Restore original shape: [B*D,H,W,C] → [B,D,H,W,C]
        moved = tf.reshape(moved_flat, tf.stack([B, D, H, W, C]))                                     
        return moved

def conv_block(x, filters, convs=2, kernel_size=3, activation='relu'):
    for _ in range(convs):
        x = layers.Conv3D(filters, kernel_size, padding='same',
                          kernel_initializer='he_normal')(x)
        x = layers.Activation(activation)(x)
    return x

def build_minimal_voxelmorph(inshape,
                             enc_features=(16, 32, 32, 32),
                             dec_features=(32, 32, 32, 32, 32, 16, 16)):
    moving = layers.Input(shape=(*inshape, 1), name='moving')
    fixed  = layers.Input(shape=(*inshape, 1), name='fixed')
    x = layers.Concatenate(axis=-1)([moving, fixed])

    skips = []
    for f in enc_features:
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPool3D(2)(x)

    x = conv_block(x, enc_features[-1] * 2)

    for f, skip in zip(dec_features, reversed(skips)):
        x = layers.UpSampling3D(2)(x)
        x = layers.Concatenate(axis=-1)([x, skip])
        x = conv_block(x, f)

    flow  = layers.Conv3D(3, 3, padding='same', name='flow')(x)
    moved = SpatialTransformer(name='moved')([moving, flow])

    return models.Model(inputs=[moving, fixed],
                        outputs=[moved, flow],
                        name='VoxelmorphMinimalFlatten')

# Instantiate model for a 128³ volume
model = build_minimal_voxelmorph((128, 128, 128))
model.summary()
```

which is converted to TOSA through the following commands:

```

docker run -u $(id -u):$(id -g) -v $(pwd):/working_dir --rm agostini01/soda \
  tf-mlir-translate \
    --graphdef-to-mlir \
    --tf-input-arrays=fixed,moving \
    --tf-input-data-types=DT_FLOAT,DT_FLOAT \
    --tf-input-shapes=1,128,128,128,1:1,128,128,128,1 \
    --tf-output-arrays=Identity,Identity_1 \
    $1 \
    -o output/tf.mlir

docker run -u $(id -u):$(id -g) -v $(pwd):/working_dir --rm agostini01/soda \
tf-opt \
  --tf-executor-to-functional-conversion \
  --tf-region-control-flow-to-functional \
  --tf-shape-inference \
  --tf-to-tosa-pipeline \
  output/tf.mlir \
  -o $2
```

In practice, what happens is that, while most of the operations are supported and translated, tf.MaxPool3D is not. It also does not appear in the list of supported operators. Thus I am wondering if there is a solution for this.
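One workaround I could imagine (a sketch of my own, not tested against the SODA pipeline): since the 2x2x2 pool is separable, each `layers.MaxPool3D(2)` could be replaced by a Lambda built from reshapes, `tf.nn.max_pool2d` over H and W, and a `tf.reduce_max` over consecutive depth pairs, all ops that the tf-to-tosa pipeline appears to handle:

```python
import tensorflow as tf
from tensorflow.keras import layers

def maxpool3d_as_2d(x):
    # Equivalent of MaxPool3D(2) for even, static D/H/W and VALID padding:
    # batched 2D pool over H,W, then a max over consecutive depth pairs.
    D, H, W, C = x.shape[1], x.shape[2], x.shape[3], x.shape[4]
    flat = tf.reshape(x, [-1, H, W, C])                          # [B*D, H, W, C]
    flat = tf.nn.max_pool2d(flat, ksize=2, strides=2, padding='VALID')
    flat = tf.reshape(flat, [-1, D // 2, 2, H // 2, W // 2, C])
    return tf.reduce_max(flat, axis=2)                           # [B, D/2, H/2, W/2, C]

# Drop-in replacement in the encoder loop:
#     x = layers.Lambda(maxpool3d_as_2d)(x)   # instead of layers.MaxPool3D(2)(x)
```

Whether the resulting reshapes survive `--tf-to-tosa-pipeline` cleanly is something I have not verified.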

Thanks in advance for any support.





</pre>