Advisor: Dr. David Hart
Author: Cansever, Ayberk
Date Available: 2025-06-05
Date Issued: May 2025
URI: http://hdl.handle.net/10342/14029

Abstract: Style transfer aims to render the content of one image in the style of another, but applying the technique to specific segments within an image poses significant challenges, particularly in achieving seamless integration between styled and non-styled regions. In this thesis, we explore improvements to segmented style transfer by introducing blended partial convolution into the processing pipeline. Specifically, we evaluate three techniques: replacing the standard convolutions in the style transfer pipeline with partial convolutions, incorporating mask dilation into the partial convolution, and applying mask feathering both prior to encoding and within the decoder. By systematically assessing these methods, we identify their contributions to improving style adaptation within designated segments, reducing boundary artifacts, and improving overall visual coherence. Preliminary results indicate that these techniques collectively have the potential to offer a more refined tool for applications in digital art, augmented reality, and image editing. This work advances the field of style transfer by addressing key limitations of segmented applications and provides a foundation for future research in localized style adaptation.

Format: application/pdf
Language: English
Subjects: Computer Engineering; Computer Science
Title: IMPROVING SEGMENTED STYLE TRANSFER VIA BLENDED PARTIAL CONVOLUTION
Type: Master's Thesis
Date Submitted: 2025-05-22
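As a rough illustration of the techniques named in the abstract, the sketch below composes partial convolution, mask dilation, and mask feathering in PyTorch. All function names, shapes, and hyperparameters (partial_conv2d, dilate_mask, feather_mask, kernel sizes, sigma) are illustrative assumptions and do not reflect the thesis's actual implementation.

```python
import torch
import torch.nn.functional as F

def partial_conv2d(x, mask, weight, bias=None, stride=1, padding=1):
    # Partial convolution (Liu et al., 2018): convolve only over valid (mask == 1)
    # pixels and renormalize by the number of valid inputs under each kernel window.
    # x: (N, C, H, W) features; mask: (N, 1, H, W) float in [0, 1].
    kh, kw = weight.shape[2], weight.shape[3]
    with torch.no_grad():
        ones = torch.ones(1, 1, kh, kw, device=x.device, dtype=x.dtype)
        valid = F.conv2d(mask, ones, stride=stride, padding=padding)  # valid count per window
    out = F.conv2d(x * mask, weight, bias=None, stride=stride, padding=padding)
    new_mask = (valid > 0).to(x.dtype)
    out = out * (kh * kw) / valid.clamp(min=1e-8)  # renormalize by fraction of valid inputs
    if bias is not None:
        out = out + bias.view(1, -1, 1, 1)
    return out * new_mask, new_mask                # zero out fully-invalid windows

def dilate_mask(mask, kernel_size=5):
    # Morphological dilation via max-pooling: grows the styled region slightly so the
    # partial convolution has valid context across the segment boundary.
    return F.max_pool2d(mask, kernel_size, stride=1, padding=kernel_size // 2)

def feather_mask(mask, radius=7, sigma=3.0):
    # Approximate Gaussian feathering with a separable blur, producing a soft ramp
    # between styled and unstyled regions instead of a hard 0/1 edge.
    coords = torch.arange(radius, dtype=mask.dtype) - radius // 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = (g / g.sum()).to(mask.device)
    mask = F.conv2d(mask, g.view(1, 1, 1, -1), padding=(0, radius // 2))
    mask = F.conv2d(mask, g.view(1, 1, -1, 1), padding=(radius // 2, 0))
    return mask.clamp(0.0, 1.0)

def blend(styled, original, soft_mask):
    # Linear blend of styled and original content under the feathered mask.
    return soft_mask * styled + (1.0 - soft_mask) * original

# Example composition: dilate the segment mask before the partial convolution, then
# feather it to blend a (stand-in) styled result back into the original image.
x = torch.rand(1, 3, 256, 256)            # original image
styled = torch.rand(1, 3, 256, 256)       # stand-in for the styled output
mask = torch.zeros(1, 1, 256, 256)
mask[..., 64:192, 64:192] = 1.0           # segment to be styled
weight = torch.randn(8, 3, 3, 3)          # stand-in for learned conv weights
feat, valid = partial_conv2d(x, dilate_mask(mask), weight, padding=1)
result = blend(styled, x, feather_mask(mask))
```

The intent of the feathered blend is that a soft ramp at the segment boundary, rather than a hard binary edge, reduces visible seams between the styled and unstyled regions; where exactly the feathering is applied (before encoding, within the decoder, or both) is one of the variables the thesis evaluates.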