Images Compose

Cover Page

DUE Wed, 03/19, 2 pm

The goals of this lab are threefold: first, to introduce you to the integration of Android View APIs with Compose; second, to use Android View’s Intent and ExoPlayer to add and manipulate images and videos in Chatter; and third, to use OkHttp3 to upload multipart/form-data asynchronously.

Images and videos can be uploaded to the server either by picking one from the device’s photo album or by taking a picture/video with the device’s camera. On the posting screen, we will want a button to access the album, one to take a photo, another to record a video, and a preview of the media to be posted. On the main screen showing the chatt timeline, we will want posted images and videos to be downloaded and displayed alongside their corresponding chatts.

Expected behavior

Post an image and a video:

DISCLAIMER: the video demo shows you one aspect of the app’s behavior. It is not a substitute for the spec. If there are any discrepancies between the demo and the spec, please follow the spec. The spec is the single source of truth. If the spec is ambiguous, please consult the teaching staff for clarification.

Setting up the back end

If you haven’t modified your back end to handle images, please go ahead and do so now:

Once you’ve updated your back end, return here to continue work on your front end.

Preparing your GitHub repo

:point_right: Go to the GitHub website to confirm that your folders follow this structure outline:

  441
    |-- # files and folders from other labs . . .  
    |-- images
        |-- composeChatter
            |-- app
            |-- gradle
    |-- # files and folders from other labs . . .       

If the folders in your GitHub repo do not have the above structure, we will not be able to grade your labs and you will get a ZERO.

Declaring dependencies

We will be adding three dependencies: OkHttp, a third-party library for uploading multipart/form-data; Coil, another third-party library for downloading and displaying images; and ExoPlayer, part of Google’s Media3 library (two artifacts: the player and its UI components), for downloading and playing back videos.

Add the following dependencies to your app build file:

    dependencies {
        // . . .
        implementation("androidx.media3:media3-exoplayer:1.1.0")
        implementation("androidx.media3:media3-ui:1.1.0")
        implementation("io.coil-kt:coil-compose:2.7.0")     
        implementation("com.squareup.okhttp3:okhttp:5.0.0-alpha.14")
    }

Adding camera feature and requesting permissions

Our application will make use of the camera feature. Navigate to your AndroidManifest.xml file and add the following inside the <manifest...> ... </manifest> block.

    <uses-feature android:name="android.hardware.camera.any"
        android:required="false" />

Setting android:required="false" lets users whose devices don’t have a camera continue to use the app. However, we then have to manually check at run time whether a camera is present and, if not, disable picture and video taking.

Next we must declare that we will be asking the user’s permission to access the device’s camera, mic, and image gallery. Add these permission tags to your app’s AndroidManifest.xml file. Find android.permission.INTERNET and add the following lines right below it:

    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.READ_MEDIA_IMAGES" />
    <uses-permission android:name="android.permission.READ_MEDIA_VIDEO" />

Without these permission tags, we wouldn’t be able to prompt the user for permission later on.

We also need to declare that we will be querying for image-cropping capability from external Activities. Add the following to your AndroidManifest.xml, for example before the <application...> ... </application> block:

    <queries>
        <intent>
            <action android:name="com.android.camera.action.CROP" />
            <data android:mimeType="image/*" />
        </intent>
    </queries>

Inside the <application block, above the android:networkSecurityConfig line, add:

        android:enableOnBackInvokedCallback="true"

This allows us to specify BackHandler() later.

Adding resources

We add some string constants to /app/res/values/strings.xml:

    <string name="album">Album</string>
    <string name="camera">Camera</string>
    <string name="video">Video</string>

Next, add the following Vector Asset icons:

Note that these are all Outlined vector assets, not Filled ones. See the audio lab spec to review how to add a Vector Asset.

As in the previous lab, we’ll collect all globally visible extensions in one file. Create a new Kotlin file called Extensions.kt and put in it the same toast() extension to Context from the previous lab.
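
If you need a reminder, a minimal sketch of toast() might look like the following; match whatever signature your previous lab actually used (the isLong parameter here is illustrative):

import android.content.Context
import android.widget.Toast

fun Context.toast(message: String, isLong: Boolean = true) {
    // show a transient popup message
    Toast.makeText(this, message, if (isLong) Toast.LENGTH_LONG else Toast.LENGTH_SHORT).show()
}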

Working with images and videos

MainActivity

First let’s allocate some scratch space to hold the images and videos we will be working with. To retain these spaces across configuration changes, we put them in a ViewModel. In your MainActivity.kt, outside your MainActivity class, add:

class MediaViewModel(val app: Application): AndroidViewModel(app) {
    var fullImageUri by mutableStateOf<Uri?>(null)
    var postImageUri by mutableStateOf<Uri?>(null)    
    var croppedImageUri by mutableStateOf<Uri?>(null)
    var recordedVideoUri by mutableStateOf<Uri?>(null)
    var playbackVideoUri by mutableStateOf<Uri?>(null)

    fun reset() {
        fullImageUri?.let { app.contentResolver.delete(it, null, null) }
        fullImageUri = null
        postImageUri = null
        croppedImageUri?.let { app.contentResolver.delete(it, null, null) }
        croppedImageUri = null
        recordedVideoUri?.let { app.contentResolver.delete(it, null, null) }
        recordedVideoUri = null
        // playbackVideoUri simply switches between recordedVideoUri
        // and Uri returned by GetContent() when picking video, which
        // is read only to us.
        playbackVideoUri = null
    }
}
URI

URI stands for Uniform Resource Identifier, a standard, hierarchical way to name things on the Internet, as defined in RFC 2396. It differs from a URL in that it doesn’t necessarily tell you how to locate the named thing. For example, content://media/external/video/media/42 names an item in Android’s MediaStore without saying anything about how to retrieve it.

PostView

We use the ViewModel only within the scope of the navigation destination that launches PostView. Add the following viewModel variable to your PostView composable:

    val viewModel: MediaViewModel = viewModel()

To work with media, we start by asking the user’s permission to access the camera, mic, and gallery. Add the following code inside your PostView() composable, after the declarations of local variables:

    var isLaunching by rememberSaveable { mutableStateOf(true) }

    val getPermissions = rememberLauncherForActivityResult(RequestMultiplePermissions()) { results ->
        results.forEach {
            if (!it.value) {
                context.toast("${it.key} access denied")
                navController.popBackStack()
            }
        }
    }

    LaunchedEffect(Unit) {
        if (isLaunching) {
            isLaunching = false

            getPermissions.launch(arrayOf(
                Manifest.permission.CAMERA,
                Manifest.permission.RECORD_AUDIO,
                Manifest.permission.READ_MEDIA_IMAGES,
                Manifest.permission.READ_MEDIA_VIDEO,
                ))
        }
    }

Whether we record a video, take a photo, or pick an image or video from the gallery, we need to implement the same three components:

  1. create an ActivityResultContract launcher for the activity,
  2. add a button to the bottomBar of PostView’s Scaffold() to launch the launcher, and
  3. add a UI element to preview the result of the activity prior to posting.

Recall that in the audio lab, we used registerForActivityResult() to register an ActivityResultContract, which must be done in the onCreate() method of an Activity. Here we use the Compose version, rememberLauncherForActivityResult(), which takes care of registering the ActivityResultContract correctly.

Android comes with APIs to TakePicture() and CaptureVideo(), and two alternative APIs for picking media: GetContent() and PickVisualMedia(). Each has a custom ActivityResultContract we can use to invoke it.

Let’s look at how to record video first.

Recording videos

RecordVideo() launcher

Create a new Kotlin file, call it Media.kt, and put the following code in it:

class RecordVideo: ActivityResultContracts.CaptureVideo() {
    override fun createIntent(context: Context, input: Uri): Intent {
        val intent = super.createIntent(context, input)

        // extend CaptureVideo ActivityResultContract to
        // specify video quality and length limit.
        with (intent) {
            putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 1) // 0 for low quality, but causes green striping on emulator
            putExtra(MediaStore.EXTRA_DURATION_LIMIT, 5) // secs, there's a 10 MB upload limit
        }
        return intent
    }
}

As mentioned earlier, Android has a custom ActivityResultContract for CaptureVideo(). We created RecordVideo() as a subclass of CaptureVideo() to control the video quality and duration limit of the recorded video. We will be using RecordVideo() instead of CaptureVideo() to launch video recording. You can change EXTRA_DURATION_LIMIT and EXTRA_VIDEO_QUALITY to different values. However, be mindful that all three versions of our back-end server limit client upload size to 10 MB. Three seconds of video captured at a resolution of 1920x1080 results in about 3 MB of data, so at that rate the 10 MB limit allows only roughly 10 seconds of video.

On the emulator, when video recording is done, sometimes the emulator complains that Camera keeps stopping. This is ok, just click Close app and carry on.

In your PostView composable, before the call to Scaffold() add:

    var vLoad by remember { mutableStateOf(true) }
    val forVideoResult =
        rememberLauncherForActivityResult(RecordVideo()) { hasVideo ->
            if (hasVideo) {
                viewModel.playbackVideoUri = viewModel.recordedVideoUri
                vLoad = !vLoad
            } else {
                Log.d("RecordVideo", "cancelled or failed")
            }
        }

In the above, we created and registered an ActivityResultContract of type RecordVideo(). We have also created a launcher for the contract and remembered it in the forVideoResult variable. When the contract completes successfully, we copy the uri holding the recorded video into viewModel.playbackVideoUri. The variable vLoad is how we force VideoPlayer() to reload even when a video’s (re-used) uri hasn’t changed, but its content has.

RecordVideoButton()

As with the previous audio lab, to allow the user to initiate video recording, we show a video button at the bottom of the PostView screen. Add a bottomBar argument to the Scaffold() of PostView, right below the topBar argument:

        bottomBar = {
            Row(
                modifier = Modifier
                    .fillMaxWidth(1f)
                    // https://stackoverflow.com/a/72638537 for Android 15: enableEdgeToEdge()
                    .padding(WindowInsets.navigationBars.asPaddingValues())
                    .background(color = WhiteSmoke),
                horizontalArrangement = Arrangement.SpaceEvenly,
                verticalAlignment = Alignment.CenterVertically
            ) {
                RecordVideoButton()
            }
        }

and add the following RecordVideoButton() composable inside your PostView composable, after the definition of the forVideoResult variable above, but before the call to Scaffold:

    @Composable
    fun RecordVideoButton() {
        IconButton(
            onClick = {
                checkCamera()
                if (viewModel.recordedVideoUri == null) {
                    viewModel.recordedVideoUri = mediaStoreAlloc(viewModel.app.applicationContext, "video/mp4")
                }
                viewModel.recordedVideoUri?.let { forVideoResult.launch(it) }
            },
        ) {
            Icon(imageVector = ImageVector.vectorResource(R.drawable.outline_videocam_24),
                contentDescription = stringResource(R.string.video),
                modifier = Modifier.scale(1.4f),
                tint = viewModel.playbackVideoUri?.let { Firebrick } ?: Moss
            )
        }
    }

When the user clicks the video button, we first check whether the device has a camera. We then allocate some scratch space in MediaStore to hold the recorded video, if we haven’t already done so, and store the uri of this space in viewModel.recordedVideoUri. Finally, we launch the forVideoResult contract launcher created earlier.

Put the checkCamera() function in your PostView, before its use in RecordVideoButton():

    fun checkCamera(){
        if (!viewModel.app.applicationContext.packageManager.hasSystemFeature(PackageManager.FEATURE_CAMERA_ANY)) {
            context.toast("Device has no camera!")
            navController.popBackStack()
        }
    }

and put the mediaStoreAlloc() function in your Media.kt file:

fun mediaStoreAlloc(context: Context, mediaType: String): Uri? {
    return context.contentResolver.insert(
        if (mediaType.contains("video"))
            MediaStore.Video.Media.EXTERNAL_CONTENT_URI
        else
            MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
        ContentValues().apply {
            put(MediaStore.MediaColumns.MIME_TYPE, mediaType)
            put(MediaStore.MediaColumns.RELATIVE_PATH, Environment.DIRECTORY_PICTURES)
        })
}

Previewing videos

To preview the recorded video before we submit it with our chatt, add a VideoPlayer() below the TextField() where you enter the chatt’s message, inside the Column() of your Scaffold() content. We will define VideoPlayer() in the next section. Since we want to preview both the image and video we want to submit with the chatt, put the VideoPlayer() inside a Row():

            Row(horizontalArrangement = Arrangement.SpaceBetween, modifier=Modifier.fillMaxWidth(1f)) {
                viewModel.playbackVideoUri?.let { uri ->
                    VideoPlayer(modifier= Modifier
                        .height(181.dp)
                        .aspectRatio(.6f, matchHeightConstraintsFirst = true), 
                        uri, vLoad, autoPlay = true)
                }
            }
ExoPlayer

We use Google’s Media3 ExoPlayer to play back video.

@androidx.annotation.OptIn(androidx.media3.common.util.UnstableApi::class)
@Composable
fun VideoPlayer(modifier: Modifier = Modifier, videoUri: Uri, reload: Boolean = true,
                autoPlay: Boolean = false) {
    val context = LocalContext.current
    val lifecycle = LocalLifecycleOwner.current.lifecycle

    var showPause by rememberSaveable { mutableStateOf(true) }
    val videoPlayer = remember { ExoPlayer.Builder(context).build() }
    var playbackPoint by rememberSaveable { mutableStateOf(0L) }

    // reset the videoPlayer whenever videoUri and/or reload change
    LaunchedEffect(videoUri, reload) {
        playbackPoint = 0L
        with (videoPlayer) {
            setMediaItem(fromUri(videoUri))
            playWhenReady = autoPlay
            seekTo(currentMediaItemIndex, playbackPoint)
            prepare()
        }
    }

    Box(modifier = modifier) {
        AndroidExternalSurface(
            modifier = modifier,
            onInit = {
                onSurface { surface, _, _ ->
                    videoPlayer.setVideoSurface(surface)
                    surface.onDestroyed { videoPlayer.setVideoSurface(null) }
                }
            }
        )
        IconButton(modifier = modifier,
            onClick = {
                with (videoPlayer) {
                    if (isPlaying) {
                        playbackPoint = 0L.coerceAtLeast(contentPosition)
                        pause()
                    } else {
                        if (playbackState == Player.STATE_ENDED) {
                            seekTo(currentMediaItemIndex, 0L)
                        }
                        play()
                    }
                }
            }
        ) {
            Icon(imageVector = ImageVector.vectorResource(
                if (showPause) {
                    R.drawable.baseline_pause_24
                } else {
                    R.drawable.baseline_play_arrow_24
                }),
                contentDescription = null,
                modifier = Modifier.scale(2f),
                tint = WhiteSmoke
            )
        }
    }

    DisposableEffect(Unit) {
        val observer = LifecycleEventObserver { _, event ->
            when (event) {
                Lifecycle.Event.ON_RESUME -> {
                    if (autoPlay) {
                        videoPlayer.play()
                    }
                }
                Lifecycle.Event.ON_PAUSE -> {
                    playbackPoint = 0L.coerceAtLeast(videoPlayer.contentPosition)
                    videoPlayer.pause()
                }
                else -> {}
            }
        }
        lifecycle.addObserver(observer)

        // Exoplayer event listener
        val listener = object : Player.Listener {
            override fun onIsPlayingChanged(isPlaying: Boolean) {
                showPause = isPlaying
            }
        }
        videoPlayer.addListener(listener)

        onDispose {
            videoPlayer.removeListener(listener)
            videoPlayer.release()
            lifecycle.removeObserver(observer)
        }
    }
}

ExoPlayer.Builder() creates an instance of ExoPlayer, which we put inside remember so that it is created only once, when VideoPlayer() first enters the composition, and not on every recomposition. We keep this remembered instance of ExoPlayer in videoPlayer.

VideoPlayer() takes the parameter videoUri to play back, and reload to indicate whether it should reload the ExoPlayer. Reloading the ExoPlayer is a side effect, so we put the code for reloading inside a LaunchedEffect(). However, instead of running the LaunchedEffect() only once, upon first launch, we want it to run every time videoUri or reload changes, hence we pass these as the keys/arguments to LaunchedEffect(). Inside LaunchedEffect(), setMediaItem() updates the ExoPlayer with the current videoUri.

Previously we have used Scaffold() to lay out UI elements in a composable. Here we use AndroidExternalSurface() which “provides a dedicated drawing Surface as a separate layer positioned by default behind the window holding the AndroidExternalSurface composable. The Surface provided can be used to present content that’s external to Compose, such as a video stream (from a camera or a media player), OpenGL, Vulkan…The provided Surface can be rendered into using a thread different from the main thread.”

Finally, the DisposableEffect() block allows the video player to pause, play, and be disposed of on the app’s appropriate lifecycle events.

Great! You should now be able to record video and preview it on your PostView!

Picking a photo or video clip

GetContent() launcher

Android has two alternatives for picking media items: GetContent() and PickVisualMedia(). GetContent() allows you to pick media from both your Google Drive and your local device’s Photos album. PickVisualMedia(), on the other hand, only allows you to pick from your local Photos album, and only recent photos and videos; as Google puts it, you can pick only media the “user has selected.” PickVisualMedia() does have a nicer, more “modern” UI. For this lab, we show only how to work with GetContent(), though the launchers for both APIs are identical except for the ActivityResultContract to launch, as sketched below.
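
For comparison, here is a sketch (not used in this lab) of what the equivalent PickVisualMedia() launcher could look like; the name forPickResult and the callback body are illustrative only:

    val forPickResult =
        rememberLauncherForActivityResult(PickVisualMedia()) { uri ->
            // handle the returned uri the same way as in
            // forContentResult's callback below
        }

    // PickVisualMedia() is launched with a media-type filter
    // instead of a MIME-type string:
    // forPickResult.launch(PickVisualMediaRequest(PickVisualMedia.ImageAndVideo))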

As with recording video, we first add a launcher for ActivityResultContract of type GetContent() in your PostView composable:

    val forContentResult = rememberLauncherForActivityResult(GetContent()) { uri ->
        uri?.let {
            if (viewModel.app.contentResolver.getType(uri).toString().contains("video")) {
                viewModel.playbackVideoUri = uri
            } else {
                checkFullImageUri()
                // cropper cannot work with original Uri, must copy
                viewModel.fullImageUri?.let { uri.copyTo(viewModel.app.contentResolver, it) }
                cropIntent?.let { forCropResult.launch(it) }
            }
        }
    }

If the picked result is a video, we simply copy its uri to viewModel.playbackVideoUri. If the picked result is an image, on the other hand, we allow the user to crop the image before posting. First we call checkFullImageUri(), which checks whether we have allocated scratch space to hold our full image and allocates it otherwise. It also initializes the image cropper to point to this scratch space if it is newly allocated. Put checkFullImageUri() in your PostView composable before its use in forContentResult:

    fun checkFullImageUri() {
        if (viewModel.fullImageUri == null) {
            viewModel.fullImageUri = mediaStoreAlloc(viewModel.app.applicationContext, "image/jpeg")
            cropIntent?.data = viewModel.fullImageUri
        }
    }

Once the scratch space for the full image is allocated and the cropper initialized, we copy the picked image into this scratch space. Due to Android’s storage policy, the cropper cannot crop the picked image in place; instead we must first copy it to the scratch space.

We add the copyTo() method as an extension to the Uri class. Add the following code to your Extensions.kt:

fun Uri.copyTo(resolver: ContentResolver, target: Uri) {
    // use{} closes both streams even if the copy throws midway
    resolver.openInputStream(this)?.use { inStream ->
        resolver.openOutputStream(target)?.use { outStream ->
            inStream.copyTo(outStream) // copies with an 8 KB buffer
        }
    }
}
Cropping a photo

When the user takes a photo or picks an image, we allow them to crop it before posting it with their chatt. We rely on Android’s undocumented cropping capability to perform the cropping function. To subscribe to this external capability, we first create a CropIntent(). Put CropIntent() in your Media.kt file:

fun CropIntent(context: Context, croppedImageUri: Uri?): Intent? {
    // Is there any registered Activity on device to do image cropping?
    val intent = Intent("com.android.camera.action.CROP")
    intent.type = "image/*"
    val listofCroppers =
        context.packageManager.queryIntentActivities(intent, PackageManager.ResolveInfoFlags.of(0L))

    // No image cropping Activity registered
    if (listofCroppers.size == 0) {
        context.toast("Device does not support image cropping")
        return null
    }

    intent.component = ComponentName(
        listofCroppers[0].activityInfo.packageName,
        listofCroppers[0].activityInfo.name)

    // create a crop box:
    intent.putExtra("outputX", 414.36)
        .putExtra("outputY", 500)
        .putExtra("aspectX", 1)
        .putExtra("aspectY", 1)
        // enable zoom and crop
        .putExtra("scale", true)
        .putExtra("crop", true)

    croppedImageUri?.let {
        // cropper puts cropped image in the provided
        // (MediaStore) space, identified by the uri,
        // and returns the same uri
        intent.putExtra(MediaStore.EXTRA_OUTPUT, it)
    } ?: run {
        // cropper allocates new space to put cropped
        // image and returns the uri of the new space        
        intent.putExtra("return-data", true)
    }

    return intent
}

This function first searches for an external, on-device Activity capable of cropping. If such an Activity exists, it creates an explicit intent to redirect the user to the image cropper, pre-setting the intent with our desired cropping features. If CropIntent() is given a uri, it tells the cropper to put the cropped image at the provided uri. Otherwise, the cropper returns the cropped image in a newly allocated space.

Back inside your PostView composable, put the following before their use in forContentResult:

    val cropIntent = remember {
        if (viewModel.croppedImageUri == null) {
            viewModel.croppedImageUri = mediaStoreAlloc(viewModel.app.applicationContext, "image/jpeg")
        }
        CropIntent(viewModel.app.applicationContext, viewModel.croppedImageUri)
    }

    var iLoad by remember { mutableStateOf(true) }
    val forCropResult = rememberLauncherForActivityResult(StartActivityForResult()) { result ->
        if (result.resultCode == Activity.RESULT_OK) {
            result.data?.data?.let {
                viewModel.postImageUri = it
            }
        } else {
            // post uncropped image
            viewModel.postImageUri = viewModel.fullImageUri
            Log.d("Crop", result.resultCode.toString())
        }
        iLoad = !iLoad
    }

There is no custom ActivityResultContract for image cropping. Instead, we launch a generic StartActivityForResult to perform the cropping Activity. To that end, we first create a cropIntent variable to store an instance of CropIntent() and remember it. In instantiating CropIntent(), we pass it a uri to hold cropped images, which we create using mediaStoreAlloc(). We store the uri of this scratch space in viewModel.croppedImageUri for re-use and clean-up.

Next we create a launcher for the ActivityResultContract of the generic StartActivityForResult and remember it in forCropResult. If the cropping is successful, we store the returned uri in viewModel.postImageUri; since we provided CropIntent() with scratch space for cropped images, as we did above, the returned uri is the viewModel.croppedImageUri we passed in. Toggling iLoad ensures that even though we re-use the same uri for cropped images, we can force the AsyncImagePainter below to reload the uri.

PickMediaButton()

Now that we have defined the cropping function, we finish up the process of allowing the user to pick from the gallery. To let the user initiate picking an image or video from the gallery, we show an album button at the bottom of the PostView screen, next to the video button. Add a call to PickMediaButton() in the bottomBar argument to the Scaffold() of PostView, right below your call to RecordVideoButton().

Then add the following PickMediaButton() composable inside your PostView composable, after the definition of the forContentResult variable above, but before the call to Scaffold:

    @Composable
    fun PickMediaButton() {
        IconButton(
            onClick = {
                forContentResult.launch("*/*")
            },
        ) {
            Icon(
                imageVector = ImageVector.vectorResource(R.drawable.outline_perm_media_24),
                contentDescription = stringResource(R.string.album),
                tint = Moss
            )
        }
    }

:point_right: When you launch GetContent(), you may be presented with a list of files under Recent files. DO NOT pick from this list; GetContent() cannot retrieve from Recent files. Instead:

Previewing photos

To preview a picked video, the VideoPlayer() you added previously should work unchanged. To preview a picked photo, add the following below the VideoPlayer(), inside the same Row() block:

                viewModel.postImageUri?.let { uri ->
                    AsyncImage(
                        model = ImageRequest.Builder(context)
                            .data(uri)
                            .setParameter("reload", iLoad)
                            .build(),
                        contentDescription = "Photo to be posted",
                        contentScale = Fit,
                        modifier = Modifier.height(181.dp),
                    )
                }

Lastly, add the following to your PostView composable so that the MediaViewModel is cleared if the user leaves PostView with the back gesture:

    BackHandler(true) {
        viewModel.reset()
        navController.popBackStack()
    }

We can now test image and video picking in addition to video recording! Make sure that when you tap the album button you are able to choose an image from your photo gallery, zoom and crop it, and then view it, in addition to picking and viewing a video from the gallery.

Taking photos

Unlike picking media from the gallery, Android’s camera API doesn’t let the user capture either an image or a video with a single call; instead we need to launch two different ActivityResultContracts from two different buttons. We need the same three components to take a picture and show it to the user:

  1. launcher for ActivityResultContract of type TakePicture() to take picture,
  2. a camera button in the bottomBar of PostView’s Scaffold() to launch the launcher, and
  3. a UI element to preview the taken picture.

The AsyncImage() you added previously should work unchanged to preview pictures taken from the camera. So we already have the third component. As for the ActivityResultContract launcher, add the following code to your PostView() composable after your definition of forCropResult. We will be using the TakePicture() contract:

    val forPictureResult = rememberLauncherForActivityResult(TakePicture()) { hasPhoto ->
        if (hasPhoto) {
            // viewModel.croppedImageUri = viewModel.takenImageUri // if cancel crop should also cancel take
            cropIntent?.let { forCropResult.launch(it) }
        } else {
            Log.d("TakePicture", "cancelled or failed")
        }
    }

To allow the user to initiate taking a photo, we show a camera button at the bottom of the PostView screen, between the video and album buttons previously defined. Add a call to TakePictureButton() in the bottomBar argument to the Scaffold() of PostView, right below your call to RecordVideoButton().

TODO 1/2: Provide a definition of TakePictureButton() composable inside your PostView composable, after the definition of the forPictureResult variable above, but before the call to Scaffold. When the camera button is clicked, we need to do the following:

  1. check that the device has a camera,
  2. check that viewModel.fullImageUri has been allocated and the cropper has been initialized with it, then
  3. launch the forPictureResult launcher with viewModel.fullImageUri to store the photo taken.

You can use R.drawable.outline_camera_rear_24 and R.string.camera in creating the icon for the button. The result of taking a picture will be stored in viewModel.postImageUri. The color of the camera icon should be determined by whether this uri is null.

You should now be able to test all three buttons: to record video, to take photo, and to pick from album, with the results shown on your PostView screen.

TODO 2/2: As in the audio lab, add a modifier argument to your Scaffold(), to allow user to dismiss the virtual keyboard to reveal the bottomBar after editing the message field.

You can’t submit the chatt yet; we’ll work on that next.

Chatt

In Chatt.kt, append two new members to the end of the Chatt class to hold the image and video URLs:

class Chatt(var username: String? = null,
            var message: String? = null,
            var id: UUID? = null,
            var timestamp: String? = null,
            var altRow: Boolean = true,
            imageUrl: String? = null,
            videoUrl: String? = null) {
    var imageUrl: String? by NullifiedEmpty(imageUrl)
    var videoUrl: String? by NullifiedEmpty(videoUrl)
}

Both imageUrl and videoUrl use the same NullifiedEmpty property delegate we used in the audio lab to guard against the various forms of empty URL. Copy the property delegate from the audio lab to Chatt.kt.
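
In case you need a reminder, here is a minimal sketch of such a delegate; copy your actual audio-lab version if it differs:

import kotlin.reflect.KProperty

// nullify empty (or literal "null") strings so the UI can simply test for null
class NullifiedEmpty(initial: String? = null) {
    private var value: String? = nullified(initial)
    operator fun getValue(thisRef: Any?, property: KProperty<*>) = value
    operator fun setValue(thisRef: Any?, property: KProperty<*>, newValue: String?) {
        value = nullified(newValue)
    }
    private fun nullified(s: String?): String? =
        if (s.isNullOrEmpty() || s == "null") null else s
}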

ChattStore

We will use OkHttp3, a third-party SDK, to upload image and video using multipart/form-data representation/encoding.

A web page with a form to fill out usually has multiple fields (e.g., name, address, net worth, etc.), each comprising a separate part of the multi-part form. Data from these multiple parts of the form is encoded using HTTP’s multipart/form-data representation. One advantage of using multipart/form-data encoding, instead of JSON for example, is that binary data can be sent as is, not encoded into a string of printable characters. Since we don’t have to encode the binary data into a character string, we can also stream directly from file to network without first loading the whole file into memory, allowing us to send much larger files. We use multipart/form-data encoding instead of JSON to send images and videos in this lab.
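
For illustration, the encoded body of such a request could look roughly like this on the wire (the boundary string is generated by the client; the field names match the parts postChatt() builds below, with the binary data abbreviated):

    --exampleBoundary
    Content-Disposition: form-data; name="username"

    someUser
    --exampleBoundary
    Content-Disposition: form-data; name="message"

    Check out this view!
    --exampleBoundary
    Content-Disposition: form-data; name="image"; filename="chattImage"
    Content-Type: image/jpeg

    <raw JPEG bytes, sent as is>
    --exampleBoundary--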

To upload multipart/form-data without OkHttp3, using a lower-level networking API, you would need more detailed knowledge of the HTTP protocol.

Add an OkHttp3 client to your ChattStore object and replace your postChatt() with:

    private val client = OkHttpClient()

    suspend fun postChatt(chatt: Chatt, imageFile: File?, videoFile: File?): ByteArray? {
        val mpFD = MultipartBody.Builder().setType(MultipartBody.FORM)
            .addFormDataPart("username", chatt.username ?: "")
            .addFormDataPart("message", chatt.message ?: "")

        imageFile?.let {
            mpFD.addFormDataPart("image", "chattImage",
                it.asRequestBody("image/jpeg".toMediaType()))
        }

        videoFile?.let {
            mpFD.addFormDataPart("video", "chattVideo",
                it.asRequestBody("video/mp4".toMediaType()))
        }

        val request = okhttp3.Request.Builder()
            .url(serverUrl+"postimages/")
            .post(mpFD.build())
            .build()

        return try {
            val response = client.newCall(request).suspendExec()

            if (response.isSuccessful) {
                response.body.bytes()
            } else {
                response.body.close()
                null
            }
        } catch (e: IOException) {
            Log.e("postChatt", e.localizedMessage ?: "Posting failed")
            null
        }
    }

By declaring OkHttpClient a property of our ChattStore object, we create only a single instance of the client, as recommended by OkHttp3 documentation, to improve performance.

The method postChatt() constructs the “form” to be uploaded as comprising:

  1. a part named “username” whose field contains the username (or the empty string if null),
  2. a part named “message” constructed similarly,
  3. an optional part named “image” whose data comes from the file imageFile, which holds the JPEG-encoded image. The string “chattImage” is the filename the data is tagged with; it can be any string. The MediaType documents the encoding of the data (though it doesn’t seem to be used for anything), and
  4. an optional part named “video”, handled similarly to the “image” part. If the File provided is on storage, the data is transferred directly from storage to network without first being loaded into memory.

OkHttp3 does not have a suspending version of enqueued requests for JVM-based systems. The user must provide a callback function that OkHttp3 calls upon completion of the upload to report the response. We have added a suspendExec() that suspends until one of the provided callbacks is called, practically converting enqueue() into a suspending execute function. Add the following extension to OkHttp3’s Call to your ChattStore, before its use in postChatt():

    private suspend fun Call.suspendExec() = suspendCoroutine { cont ->
        enqueue(object : Callback {
            override fun onResponse(call: Call, response: Response) {
                cont.resume(response)
            }
            override fun onFailure(call: Call, e: IOException) {
                cont.resumeWithException(e)
            }
        })
    }

When the suspending suspendExec() returns, if the response isSuccessful, i.e., the HTTP status code is in the range [200, 300), the body of the response is returned to the function calling postChatt() as a ByteArray. Otherwise, or if there was an error, null is returned.

We now convert getChatts() to use OkHttp3. Again we use our suspendExec() to create a suspending getChatts():

    suspend fun getChatts() {
        // only one outstanding retrieval
        synchronized(this) {
            if (isRetrieving) {
                return
            }
            isRetrieving = true
        }
        
        val request = okhttp3.Request.Builder()
            .url(serverUrl+"getimages/")
            .build()

        try {
            val response = client.newCall(request).suspendExec()

            // http status code is in [200-300)
            // https://square.github.io/okhttp/3.x/okhttp/okhttp3/Response.html#isSuccessful--
            if (response.isSuccessful) {
                // ResponseBody must be .close()d, or done automatically by .bytes()
                // https://square.github.io/okhttp/3.x/okhttp/okhttp3/ResponseBody.html
                val chattsReceived = try {
                    JSONArray(response.body.string())
                } catch (e: JSONException) {
                    JSONArray()
                }

                var idx = 0
                val _chatts = mutableListOf<Chatt>()
                for (i in 0 until chattsReceived.length()) {
                    val chattEntry = chattsReceived[i] as JSONArray
                    if (chattEntry.length() == nFields) {
                        _chatts.add(
                            Chatt(
                                username = chattEntry[0].toString(),
                                message = chattEntry[1].toString(),
                                id = UUID.fromString(chattEntry[2].toString()),
                                timestamp = chattEntry[3].toString(),
                                altRow = idx % 2 == 0,
                                imageUrl = chattEntry[4].toString(),
                                videoUrl = chattEntry[5].toString(),
                            )
                        )
                        idx += 1
                    } else {
                        Log.e("getChatts",
                            "Received unexpected number of fields " + chattEntry.length()
                                .toString() + " instead of " + nFields.toString()
                        )
                    }
                }
                chatts = _chatts
            } else {
                Log.e("getChatts", "NETWORKING ERROR (${response.isRedirect})")
            }
            response.body.close()
        } catch (e: IOException) {
            Log.e("getChatts", e.localizedMessage ?: "Failed GET request")
        }
        synchronized(this) {
            isRetrieving = false
        }
    }

As with the Signin lab, we no longer need to create a Volley RequestQueue. You can remove the queue property and initQueue() function from your ChattStore object. Remove the following lines:

    private lateinit var queue: RequestQueue

    fun initQueue(context: Context) {
        queue = newRequestQueue(context)
    }

Remove the call to initQueue() in MainActivity.onCreate() and replace the subsequent call to getChatts() with:

    lifecycleScope.launch {
        getChatts()
    }

Finally, remove the Volley dependency from your app build file. Remove:

    implementation("com.android.volley:volley:1.2.1")

PostView

Since postChatt() is now a suspending function, modify your SubmitButton() to launch postChatt() inside your viewModel’s CoroutineScope. Replace the whole onClick block of the IconButton() in SubmitButton() with:

            isEnabled = false
            viewModel.viewModelScope.launch {
                var iFile: File? = null
                var vFile: File? = null

                viewModel.postImageUri?.run {
                    toFile(viewModel.app.applicationContext)?.let {
                        iFile = it
                    } ?: context.toast("Unsupported image format")
                }

                viewModel.playbackVideoUri?.run {
                    toFile(viewModel.app.applicationContext)?.let {
                        vFile = it
                    } ?: context.toast("Unsupported video format")
                }

                postChatt(Chatt(username, message), iFile, vFile)?.let {                
                    getChatts()
                }
                viewModel.reset()
                withContext(Dispatchers.Main) {
                    navController.popBackStack()
                }
            }

To upload data directly from MediaStore, given its URI, we add a toFile() method as an extension to the Uri class. Add the following extension method to your Extensions.kt file:

fun Uri.toFile(context: Context): File? {
    if (!(authority == "media" || authority == "com.google.android.apps.photos.contentprovider")) {
        // for on-device media files only
        context.toast("Media file not on device")
        Log.d("Uri.toFile", authority.toString())
        return null
    }

    var file: File? = null
    if (scheme.equals("content")) {
        val cursor = context.contentResolver.query(
            this, arrayOf("_data"),
            null, null, null
        )

        cursor?.run {
            moveToFirst()
            val col = getColumnIndex("_data")
            if (col != -1) {
                val path = getString(col)
                if (path != null) {
                    file = File(path)
                }
            }
            close()
        }
    }
    return file
}

Depending on your upload bandwidth, uploading video can take a long time.

With the updated PostView(), you can now take or pick an image and/or video and send them to your Chatter back end! Since we haven’t worked on image/video download, you can verify this by inspecting the content of your chatts table in the postgres database at the back end.

ChattListRow

To display the video and image associated with a posted chatt, add the following below the display of the chatt’s message, inside your Column():

        LazyRow(horizontalArrangement = Arrangement.SpaceBetween, modifier=Modifier.fillMaxWidth(1f)) {
            chatt.videoUrl?.let {
                item { VideoPlayer(modifier = Modifier.height(181.dp).aspectRatio(.6f, matchHeightConstraintsFirst = true).padding(4.dp, 0.dp, 4.dp, 10.dp),
                    it.toUri()) }
            }
            chatt.imageUrl?.let {
                item {
                    SubcomposeAsyncImage(it,
                        contentDescription = "Photo posted with chatt",
                        loading = { CircularProgressIndicator() },
                        contentScale = Fit, 
                        modifier = Modifier.height(181.dp).padding(4.dp, 0.dp, 4.dp, 10.dp)
                    )
                }
            }
        }

If a given chatt comes with a video URL, the video player will be shown and, when clicked, will play back the video using VideoPlayer(). If the chatt has an image URL, the image will be downloaded asynchronously using SubcomposeAsyncImage(), which is like AsyncImage() but allows us to show a CircularProgressIndicator() while the image is still loading. We use LazyRow() here because if a chatt has neither a video nor an image, LazyRow() will not take up any screen space.

Congratulations, you’ve successfully added the ability to access your device’s gallery or camera, upload/download images and videos to/from your server, and display images and play back videos in your app!

There are no special instructions to run this lab on the Android emulator.

Submission guidelines

We will only grade files committed to the master or main branch. If you use multiple branches, please merge them all to the master/main branch for submission.

Ensure that you have completed the back-end part and have pushed your changes to your back-end code to your 441 GitHub repo.

Push your images lab folder to your GitHub repo as set up at the start of this spec.

git push

:point_right: Go to the GitHub website to confirm that your front-end files have been uploaded to your GitHub repo under the folder images. Confirm that your repo has a folder structure outline similar to the following. If your folder structure is not as outlined, our script will not pick up your submission, you will get a ZERO, and you will further have problems getting started on later labs. There could be other files or folders in your local folder not listed below; don’t delete them. As long as you have installed the course .gitignore as per the instructions in Preparing GitHub for EECS 441 Labs, only files needed for grading will be pushed to GitHub.

  441
    |-- # files and folders from other labs . . .
    |-- images
        |-- composeChatter
            |-- app
            |-- gradle   
    |-- # files and folders from other labs . . .

Verify that your Git repo is set up correctly: on your laptop, grab a new clone of your repo, then build and run your submission to make sure that it works. You will get a ZERO if your lab doesn’t open, build, or run.

IMPORTANT: If you work in a team, put your teammate’s name and uniqname in your repo’s README.md (click the pencil icon at the upper right corner of the README.md box on your git repo) so that we know. Otherwise, we could mistakenly think that you were cheating and report you to the Honor Council, which would be a hassle to undo. You don’t need a README.md if you work by yourself.

Review your information on the Lab Links sheet. If you’ve changed your teaming arrangement from the previous lab’s, please update your entry. If you’re using a different GitHub repo from the previous lab’s, invite eecs441staff@umich.edu to your new GitHub repo and update your entry.

References

Exoplayer and AndroidView

Android Camera

Image download

Image cropping

Not updated to Android 11:

Image upload

OkHttp3

MediaStore and scoped storage

Misc. topics

Appendix: imports


Prepared for EECS 441 by Benjamin Brengman, Wendan Jiang, Alexander Wu, Ollie Elmgren, Tianyi Zhao, Nowrin Mohamed, Yibo Pi, and Sugih Jamin. Last updated: December 9, 2024.