DYNAMIC VISUAL COMMUNICATION PICTURE FRAMING IN VIRTUAL REALITY GRAPHIC DESIGN USING FUZZY LOGIC

Heng Wu and He Feng


Noise reduction | Bilateral filtering, non-local means | Remove artifacts and noise | [1], [6], [18]
Resolution adjustment | Bicubic interpolation, super-resolution | Optimise image quality for VR display | [1], [6], [18]

Figure 1. System architecture for PFDN-VGDM.

These processing steps result in a higher-quality VR experience with more complex and dynamic visual aspects.

3.1.1 Dynamic Segmentation

The study uses K-means clustering to divide the picture into separate parts. Its goal is to minimise the total squared deviation between each pixel and the centre of its respective cluster:

min_{{S_i}} Σ_{i=1}^{k} Σ_{x ∈ S_i} ‖x − µ_i‖²   (1)

In (1), S_i represents the i-th segment and µ_i is the centroid of segment S_i; the objective is the segmentation that minimises the sum of squared distances between each point x and its corresponding centroid µ_i. By contrast, the Watershed algorithm treats the grayscale image as a topographic surface in which bright pixels signify high elevations, which makes overlapping objects easier to separate. Segmentation in PFDN-VGDM is best handled by the K-means method because of its strength in clustering picture pixels by feature similarity, which is essential for separating VR images into meaningful segments. It works well in real-time VR settings because of its simplicity, scalability, and ability to minimise the sum of squared distances between cluster centroids and pixels. The study's objective of improving visual communication in VR through accurate and adaptive segmentation aligns with K-means' computational efficiency and adaptability to changing virtual settings.

3.1.2 Real-Time Noise Reduction

By averaging pixel values with Gaussian weights that depend on both spatial and intensity differences, the bilateral filter smooths images while maintaining the integrity of their edges:

BF[I]_p = (1 / W_p) Σ_{q ∈ S} G_{σs}(‖p − q‖) G_{σr}(|I_p − I_q|) I_q   (2)

Bilateral filtering is represented by (2). At pixel p, the filtered image value BF[I]_p is a weighted sum of the values of the surrounding pixels I_q. The spatial Gaussian function G_{σs} and the range Gaussian function G_{σr} compute the weights, which reduce noise while retaining edges; W_p is the normalisation factor.

Table 3. Feature Extraction Methods
Feature | Method | Output | References
Shape | Contour analysis, Hough transform | Geometric primitives, object boundaries | [1], [2]
Colour | Histogram analysis, K-means clustering | Colour palette, dominant colours | [3], [4]
Style | Convolutional neural networks | Artistic style classification | [5], [6]
Spatial | Graph-based analysis | Relative positions, hierarchical structure | [7], [8]

3.1.3 Dynamic Resolution Adjustment

Bicubic interpolation determines the value of a new pixel from the 16 pixels nearest to it, generating smoother and more precise images than simpler interpolation techniques. Super-resolution uses deep learning models to enhance picture resolution, generating high-quality images from lower-resolution inputs. Dynamic image processing allows real-time analysis, adaptation, and improvement of visual content, optimising visual communication in VR.
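As a concrete illustration of the segmentation stage in Section 3.1.1, the K-means objective in (1) can be sketched in a few lines of Python. This is a minimal, hypothetical sketch on one-dimensional grayscale intensities, not the paper's implementation; in PFDN-VGDM each x would be a multi-dimensional pixel feature vector.

```python
# Minimal K-means sketch for the objective in (1): minimise the sum of
# squared distances between each pixel value x and its cluster centroid mu_i.
# Illustrative only: 1-D grayscale intensities stand in for pixel features.

def kmeans_1d(pixels, k, iters=20):
    # Initialise centroids by spreading them over the observed intensity range.
    lo, hi = min(pixels), max(pixels)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    segments = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each pixel joins the segment S_i of its nearest centroid.
        segments = [[] for _ in range(k)]
        for x in pixels:
            i = min(range(k), key=lambda i: (x - centroids[i]) ** 2)
            segments[i].append(x)
        # Update step: each centroid mu_i moves to the mean of its segment.
        centroids = [sum(s) / len(s) if s else centroids[i]
                     for i, s in enumerate(segments)]
    # Objective value from (1): total within-cluster squared deviation.
    sse = sum((x - centroids[i]) ** 2
              for i, s in enumerate(segments) for x in s)
    return centroids, segments, sse

# Two well-separated intensity groups: dark (~10) and bright (~200).
pixels = [8, 10, 12, 198, 200, 202]
centroids, segments, sse = kmeans_1d(pixels, k=2)
print(sorted(round(c) for c in centroids))  # -> [10, 200]
```

The update loop strictly decreases the objective in (1) until the assignments stabilise, which is why the method suits real-time use.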
It can detect changes, react to human input, and adapt to changing environments in real time to monitor motion, extract features, and segment images. Advanced algorithms, such as optical flow for motion analysis, convolutional neural networks (CNNs) for feature identification, and probabilistic models for adaptive filtering, ensure consistent and high-quality visuals regardless of angle, illumination, or input. This constant optimisation makes VR experiences more compelling and user-friendly, clarifies visual signals, and allows for design modifications.

3.2 Enhanced Feature Extraction

Table 3 shows how graphic designers extract features from VR images. Contour analysis and the Hough transform identify geometric primitives and object boundaries. Histogram analysis and K-means clustering determine colour palettes and identify dominant hues. CNNs classify artistic styles, while graph-based analysis captures spatial links and hierarchies.

3.3 PFDN Creation

PFDNs are advanced fuzzy cognitive maps (FCMs) featuring probabilistic modelling, fuzzy logic, and dynamic network behaviour.

Table 4. Fuzzy Logic Operations
Operation | Formula | Description
AND | min(µ_A(x), µ_B(x)) | Intersection of fuzzy sets
OR | max(µ_A(x), µ_B(x)) | Union of fuzzy sets
NOT | 1 − µ_A(x) | Complement of a fuzzy set

Fuzzy control logic represents uncertain relationships between variables or concepts in a dynamic system, while probabilistic techniques capture stochasticity. Activating nodes and updating edge weights adjusts the weighted links between variables or concepts, allowing the model to represent and analyse complex, unexpected, and dynamic PFDN interactions. Reducing uncertainty improves forecasting, scenario analysis, and prediction. Fuzzy logic produces PFDN Creation's relational model from the extracted features: it designs nodes, determines node interactions, and weights edges.
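The fuzzy set operations in Table 4 translate directly into code. A minimal sketch using the standard Zadeh operators (the example membership values are illustrative, not taken from the paper):

```python
# Fuzzy logic operations from Table 4, applied to membership degrees in [0, 1].

def fuzzy_and(mu_a, mu_b):
    return min(mu_a, mu_b)   # intersection of fuzzy sets

def fuzzy_or(mu_a, mu_b):
    return max(mu_a, mu_b)   # union of fuzzy sets

def fuzzy_not(mu_a):
    return 1 - mu_a          # complement of a fuzzy set

# Example: a design element is 0.7 "balanced" and 0.4 "high-contrast".
balanced, contrast = 0.7, 0.4
print(fuzzy_and(balanced, contrast))        # -> 0.4
print(fuzzy_or(balanced, contrast))         # -> 0.7
print(round(fuzzy_not(balanced), 2))        # -> 0.3
```

Unlike binary logic, these operators act on partial memberships, which is what lets the PFDN grade concept linkages rather than switch them on and off.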
The resulting VR design decisions address ambiguity and complexity, providing a flexible, context-aware system that optimises visual communication.

Table 4 shows how the PFDN represents complex and subjective concept connections using fuzzy logic operations such as AND, OR, and NOT. These operations describe interactions in detail, making VR judgments dynamic and context-aware. Fuzzy logic offers partial membership instead of binary logic, making concept linkages more adaptable. This is needed in complex systems like virtual reality because relationships are intricate:

R(x, y) = {((x, y), µ_R(x, y)) | x ∈ X, y ∈ Y}   (3)

The fuzzy relationship between x and y is described by (3), where µ_R(x, y) measures the strength of association. It provides a formal framework for representing and measuring the type and intensity of the connections between FCM components. The PFDN for the virtual graphic design map (PFDN-VGDM) system integrates a fuzzy controller by adding a fuzzy logic component for VR design decisions. User perspective (UP), VR environment parameters (VRE), style recognition score (SR), colour analysis findings (CA), and shape detection accuracy (SD) feed this controller. The controller node sits between the design element and visual quality nodes, and fuzzy rules generate design element adjustments. The input and output fuzzy sets are, respectively, Low, Medium, High and Minor, Moderate, Major.

Figure 2. PFDN for PFDN-VGDM.

To maximise VR visual quality, the fuzzy controller fuzzifies inputs, applies rules, aggregates outputs, and defuzzifies using triangular or trapezoidal membership functions to generate a crisp DEA value:

A_i(k + 1) = f( A_i(k) + Σ_{j=1}^{n} A_j(k) · w_ji + FC(UP, VRE, SR, CA, SD) )   (4)

Equation (4) gives the state update for node i in the PFDN. At time step k + 1, node i's state is determined by its current state and the weighted influence of the other nodes. It records the system's dynamic behaviour, revealing how each concept's state changes over time in response to inputs from the other concepts.
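Under stated assumptions, the update rule in (4) can be sketched as follows. The squashing function f and the fuzzy controller FC below are illustrative placeholders (a clipping function and a stub returning a small mean-based adjustment); the paper's actual membership functions and rule base are not reproduced here.

```python
# Sketch of the PFDN node update in (4):
#   A_i(k+1) = f( A_i(k) + sum_j A_j(k) * w_ji + FC(UP, VRE, SR, CA, SD) )
# f and FC are hypothetical stand-ins, not the paper's definitions.

def f(x):
    # Squashing function keeping node activations in [0, 1].
    return max(0.0, min(1.0, x))

def fuzzy_controller(up, vre, sr, ca, sd):
    # Placeholder DEA value: a small adjustment from the mean of the inputs.
    return 0.1 * (up + vre + sr + ca + sd) / 5

def pfdn_step(A, W, inputs):
    # One synchronous update of all node states per (4).
    dea = fuzzy_controller(*inputs)
    n = len(A)
    return [f(A[i] + sum(A[j] * W[j][i] for j in range(n)) + dea)
            for i in range(n)]

A = [0.5, 0.2, 0.8]                 # current node states A_i(k)
W = [[0.0, 0.3, -0.2],              # W[j][i] = weight w_ji from node j to node i
     [0.1, 0.0, 0.4],
     [-0.5, 0.2, 0.0]]
inputs = (0.6, 0.5, 0.7, 0.8, 0.4)  # (UP, VRE, SR, CA, SD)
A_next = pfdn_step(A, W, inputs)
print([round(a, 3) for a in A_next])  # -> [0.18, 0.57, 0.84]
```

Iterating `pfdn_step` until the states stop changing is the convergence behaviour the main optimisation loop relies on.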
FC denotes the fuzzy controller function that produces the DEA value. Integrating the fuzzy controller with the PFDN lets the VR design process make more sophisticated and context-aware decisions: because the controller manages the inherent uncertainties in user perception and environmental conditions, the VR graphic design system achieves more responsive and adaptive design optimisations.

The PFDN structure is defined as follows. Design aspects and environmental factors are represented by nodes N = {n1, n2, ..., nk}. Edges E = {eij} represent probabilistic fuzzy relationships between nodes: each edge eij connecting node i to node j carries a weight wij and an associated probability distribution pij. The PFDN uses these edge weights and probability distributions to depict the fuzzy links between design elements and environmental parameters, together with their uncertainty. In VR design, nodes represent elements such as colour palettes, area layouts, and user interactions; edges show the strength of influence, with the probability distributions capturing variability or uncertainty in how these factors interact.

Figure 2 shows the PFDN for the VR graphic design system. Dynamic segmentation, noise reduction, and resolution adjustment are controlled by the map's start input node. These steps feed spatial relationship analysis, style recognition, colour analysis, and shape detection. The assessments feed the design element node, which the user perspective modifies. Design element-VR environment interactions affect the VR environment, and the VR setup and visual quality affect user experience. User experience data from performance optimisation feeds back to the design element and VRE nodes. This iterative approach improves VR by responding to user feedback and system performance in real time. Multiple inputs are integrated via '+' nodes. The map shows the relevance of user feedback and steady advances in developing and optimising the VR graphic design system.

3.4 Advanced VR Optimisation

Figure 3. Advanced VR Optimisation.

Figure 3 illustrates PFDN's VR design optimisation workflow. It begins with fuzzy cognitive maps, a cluster of interconnected nodes that drive decision-making. A design interface icon represents layout optimisation using this input; a palette and brush tool represent colour optimisation for aesthetics; several screens represent interaction optimisation to improve user experience. Finally, these refined elements form a globe with gears and a speedometer, symbolising a refined, efficient, and globally relevant VR experience.

Table 5. Optimisation Criteria
Aspect | Criteria | Measurement
Layout | Balance, symmetry, golden ratio | Spatial distribution score
Colour | Harmony, contrast, accessibility | Colour harmony index
Interaction | Ergonomics, intuitiveness | User effort estimation

In Table 5, the layout criteria used in VR design optimisation include symmetry, balance, and the golden ratio, quantified by spatial distribution scores. Colour standards include accessibility, harmony, and contrast, measured using a colour harmony index. The user effort estimation assesses the ergonomics and intuitiveness of the interaction criteria:

B = 1 − ( |Σ_i w_i x_i| / Σ_i w_i + |Σ_i w_i y_i| / Σ_i w_i ) / 2   (5)

Table 6. Compression Techniques
Data Type | Technique | Compression Ratio
Geometry | Mesh simplification, quantisation | 10:1 - 20:1
Textures | ASTC, ETC2, BC7 | 4:1 - 8:1
Interactions | Keyframe reduction, Bézier curves | 5:1 - 10:1

In (5), visual equilibrium is achieved by calculating the layout balance B from the weighted placements of the elements.
A well-balanced design yields a high spatial distribution score: the weighted horizontal and vertical offsets of the elements, Σ w_i x_i and Σ w_i y_i, are normalised by the total visual weight Σ w_i, and their mean magnitude is subtracted from one:

H = 1 − Σ |ΔE*_ab| / (n · max(ΔE*_ab))   (6)

In (6), colour harmony H is computed from colour differences (ΔE*_ab) in CIELAB space. To judge whether the colour scheme is visually pleasing, the formula sums the absolute colour differences and normalises them against the maximum difference and the number of samples n.

3.5 Scalable Compression System

The scalable compression system optimises VR design by reducing data size without compromising quality, compressing geometry, textures, and interaction data. Table 6 shows that geometry mesh simplification and quantisation, texture formats including ASTC, ETC2, and BC7, and interaction keyframe reduction with Bézier curves produce compression ratios of 4:1 to 20:1. The achievable ratio depends on redundancy in the VR graphics data, image quality, texture complexity, and encoding. High-resolution images and detailed textures yield low compression ratios and larger data volumes, since they cannot be compressed further without quality loss. Effective encoding and redundancy in the visual content boost compression ratios, lowering data size without affecting quality. Higher compression ratios reduce storage and bandwidth needs, improving VR data streaming and real-time rendering.

3.6 Environment Simulation Engine

The environment simulation engine maximises performance for visual fidelity and smooth operation, creating a realistic virtual world. It calculates complex lighting, shadows, and object interactions at high frame rates to reduce user disorientation.
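The layout balance and colour harmony measures in (5) and (6) can be sketched as follows. This is an illustrative reading of the two formulas, not the paper's code: element coordinates are assumed to be centred on the canvas, and the ΔE*_ab values are taken as given (a full CIELAB conversion is omitted).

```python
# Layout balance from (5): 1 minus the mean magnitude of the weighted
# horizontal and vertical offsets, each normalised by the total visual weight.
# Coordinates (x_i, y_i) are assumed centred, so 0 means "on the canvas centre".

def layout_balance(weights, xs, ys):
    total = sum(weights)
    off_x = abs(sum(w * x for w, x in zip(weights, xs))) / total
    off_y = abs(sum(w * y for w, y in zip(weights, ys))) / total
    return 1 - (off_x + off_y) / 2

# Colour harmony from (6): 1 minus the summed CIELAB differences |dE*_ab|,
# normalised by the sample count n times the largest difference.

def colour_harmony(delta_es):
    n = len(delta_es)
    return 1 - sum(abs(d) for d in delta_es) / (n * max(abs(d) for d in delta_es))

# A symmetric two-element layout balances perfectly:
print(layout_balance([1, 1], [-0.5, 0.5], [0.0, 0.0]))  # -> 1.0
# Mostly-small differences with one dominant contrast score mid-range:
print(colour_harmony([2, 3, 10]))                       # -> 0.5
```

Both scores land in [0, 1], so they can be compared and combined directly during optimisation.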
Virtual realities use this intricate technique to generate living, breathing worlds.

Algorithm 1: Adaptive VR Design Optimisation
Input: VR_Image
Output: Optimised VR design
Step 1: Image preprocessing
  processed_image = preprocess_image(VR_Image)
Step 2: Feature extraction
  features = extract_features(processed_image)
Step 3: Initialise PFDN
  FCM = initialise_PFDN()
Step 4: Main optimisation loop
  for iteration = 1 to max_iterations do
    UP = get_user_perspective()
    VRE = analyse_vr_environment()
    SR, CA, SD = features[2], features[1], features[0]
    DEA = fuzzy_controller(UP, VRE, SR, CA, SD)
    for i = 1 to num_nodes do
      weighted_sum = 0
      for j = 1 to num_nodes do
        weighted_sum += weights[j][i] * FCM[j]
      end for
      FCM[i] = fuzzy_inference(FCM[i] + weighted_sum + DEA[i])
    end for
    if convergence_reached(FCM) then
      break
    end if
  end for
Step 5: Generate optimised design
  Optimised_VR_Design = generate_design(FCM)
Step 6: Return Optimised_VR_Design

The PFDN-VGDM procedure in Algorithm 1 optimises VR designs by processing and refining the input visuals. Image preprocessing comes first: adjusting resolution, eliminating noise, and segmenting. Feature extraction then identifies shape, colour, style, and spatial features. Algorithm 1's convergence condition is satisfied when the iterative increase in design efficiency or accuracy falls below a predetermined threshold, meaning that additional modifications no longer yield significant benefits; the optimisation ends as soon as an optimal or near-optimal solution is found. An optimised design is indicated by improved efficiency metrics, such as reduced computational load, faster rendering times, and enhanced user satisfaction within VR environments, together with high accuracy in shape detection, colour analysis, style classification, and interactive performance. The fuzzy controller then applies fuzzy logic principles to these features, considering the user's perspective and the VR environment settings, to calculate a design element adjustment (DEA) value.
These features, along with the DEA value, refine the fuzzy cognitive map (FCM) through its nodes. User feedback and environmental analysis update the FCM in the main PFDN-VGDM loop until convergence, after which the VR design is optimised and ready for execution. This method's flexible and adaptive design enhances the visuals of VR applications.

4. Simulation of PFDN-VGDM

Dataset Study

The study utilised the open photographs dataset V7 [24], which comprises a large number of annotated photographs. This dataset consists of numerous objects with rich annotations, making it particularly useful for visual understanding, object detection, and segmentation. The collection labels item bounding boxes, segmentation masks, and object relationships. The paper analyses how the fuzzy control-based virtual graphic design map (PFDN-VGDM) improves VR visual communication. The experiment used Oculus Quest 2 and HTC Vive Pro VR headsets, high-performance PCs with Intel i9 CPUs and NVIDIA RTX 3080 GPUs, and Unity 3D (2021.2) for the virtual worlds. MATLAB R2021b was used for the fuzzy logic techniques and image processing, while Python 3.9 with TensorFlow and PyTorch was used for style recognition and super-resolution. The PFDN-VGDM system was tested on 1000 logos, brand identities, product packaging designs, poster and commercial designs, user interfaces, and web designs.

Comparison Study

The model is compared to others to demonstrate its value. This comparative study evaluates visual communication design using shape detection accuracy, colour analysis accuracy, style classification accuracy, spatial relationship accuracy, average quality improvement, average user satisfaction, virtual environment performance, and static display performance. The compared algorithms are VRVC [20], DTVD [19], and MVDA [18].

4.1 Average Quality Improvement

Average quality improvement is the overall quality improvement over time. It evaluates the average improvement in performance or effectiveness across parameters. This enhancement can be quantified by accuracy, dependability, usability, and efficiency. Tracking these gains over time enables firms to evaluate their processes, products, and services and make informed decisions to enhance quality and customer satisfaction. The individual quality parameters included in the quality metrics are efficiency (E), accuracy (A), dependability (D), and usability (U):

Average Quality Improvement = (1/n) Σ_{i=1}^{n} (Q_{i,new} − Q_{i,base}) / Q_{i,base}   (7)

In (7), the relative improvement of each quality metric over the evaluation period is summed and divided by the total number of metrics n to obtain the average quality improvement. The method supports quality and user-satisfaction strategy decisions by evaluating overall effectiveness enhancements across metrics.

Figure 4. Average quality improvement.

In Figure 4, the average quality improvement for the PFDN-VGDM framework measures accuracy, reliability, usability, and efficiency gains over the evaluation periods. Across the low, medium, and high levels, MVDA improves by 14, 18, and 20; DTVD by 10, 12, and 19; VRVC by 13, 18, and 21; and PFDN-VGDM by 20, 25, and 35. These measurements of framework efficacy inform strategic decisions to improve quality and user satisfaction.

4.2 Design Efficiency Improvement

Design efficiency improvement involves improving design processes, workflows, and techniques to boost productivity, quality, and effectiveness. Key components include optimising design resources, integrating modern technologies such as automation and AI, refining workflows, and continually improving design practices. The primary objectives are reducing time-to-market, mistakes, and resource utilisation while improving design performance, so as to meet project goals and satisfy stakeholder expectations.
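The average quality improvement in (7) is a simple mean of relative gains; a minimal sketch (the metric scores below are illustrative, not the paper's measurements):

```python
# Average quality improvement from (7): mean relative gain of each quality
# metric over its baseline, (Q_new - Q_base) / Q_base, averaged over n metrics.

def average_quality_improvement(baseline, new):
    gains = [(new[k] - baseline[k]) / baseline[k] for k in baseline]
    return sum(gains) / len(gains)

# Hypothetical scores for efficiency (E), accuracy (A), dependability (D),
# and usability (U) before and after optimisation.
baseline = {"E": 50, "A": 80, "D": 60, "U": 70}
new      = {"E": 60, "A": 88, "D": 75, "U": 84}
print(round(average_quality_improvement(baseline, new), 2))  # -> 0.19
```

Because each gain is normalised by its own baseline, metrics measured on different scales contribute comparably to the average.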
Efficiency metrics: the compared methods are machine vision-based design analysis (MVDA), digital technology in visual communication design (DTVD), VR for visual communication (VRVC), and the fuzzy control logic-based virtual graphic design map (PFDN-VGDM):

Efficiency improvement = Σ_{i=1}^{n} ω_i ρ_i / Σ_{i=1}^{n} ω_i   (8)

In (8), ω_i is the weight of the i-th optimisation criterion and ρ_i is the performance improvement from the i-th criterion. The computation calculates the efficiency metrics (MVDA, DTVD, VRVC, PFDN-VGDM) daily, weekly, and monthly, sums them for each period, and divides the sum by the number of metrics to obtain the average efficiency improvement.

Figure 5. Design efficiency improvement.

In Fig. 5, the performance enhancements in the design were examined using the MVDA, DTVD, VRVC, and PFDN-VGDM algorithms. Daily metrics are reasonable, with PFDN-VGDM scoring best at 8. Weekly results improve slightly, with PFDN-VGDM reaching 9. With a monthly performance of 9, PFDN-VGDM still leads. These numbers show that PFDN-VGDM is the most effective at increasing design efficiency, which bodes well for future design processes.

4.3 Interactive Environment Performance

Interactive environment performance measures indicate how interactive systems or environments work in real time. They evaluate the responsiveness, user experience, reliability, and adaptability of interactive elements in digital or physical environments. The assessment helps determine how well interactive systems match user expectations, optimise usability, and ensure seamless user engagement:

IEP = Σ_i (w_i · P_i) / Σ_i w_i × 100%   (9)

In (9), for interactive environment performance (IEP), w_i is the weight of the i-th performance metric and P_i is its score, where i ranges over {adaptation, user experience, reliability, responsiveness}.
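The weighted score in (9) can be sketched directly; the weights and metric scores below are hypothetical, chosen only to show the normalisation:

```python
# Interactive environment performance from (9): a weighted average of the
# metric scores P_i, scaled to a percentage. Weights and scores are illustrative.

def iep(scores, weights):
    total = sum(weights[k] * scores[k] for k in scores)
    return total / sum(weights.values()) * 100

scores  = {"responsiveness": 0.90, "user_experience": 0.85,
           "reliability": 0.95, "adaptation": 0.80}
weights = {"responsiveness": 3, "user_experience": 2,
           "reliability": 2, "adaptation": 3}
print(round(iep(scores, weights), 1))  # -> 87.0
```

Dividing by the weight sum means the weights only need to express relative importance; they do not have to add up to one.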
In (9), the weights (w_i) for interactive environment performance (IEP) are set according to the significance of each performance parameter in the particular interactive VR setting. Responsiveness, reliability, user experience, and adaptation are common measures in this category. Expert opinion, empirical research, or user input determines the weights, which represent the relative importance of each parameter for a smooth interactive experience. The weights may change with the situation, since different virtual reality uses (such as training simulations versus entertainment) can place different values on qualities like responsiveness and user experience. Adjusting these weights ensures the IEP computation reflects performance in a variety of interactive contexts.

Figure 6. Interactive environment performance.

In Fig. 6, the design system's performance in interactive VR settings is measured by interactive environment performance. With 90% performance, PFDN-VGDM beats all other systems in basic situations. PFDN-VGDM also leads in moderately complicated situations with 85% performance, and improves to 87% in difficult situations. The data shows that PFDN-VGDM is the best solution for interactive VR designers because it can manage projects of all levels of complexity, maintaining high and consistent performance throughout.

4.4 Static Display Performance

SDP_context = SP_context / MP_context × 100   (10)

In (10), SDP_context represents static display performance for a specific context (2D, 3D, or VR), SP_context represents the system performance in that context, MP_context represents the maximum possible performance in that context, and context ∈ {2D, 3D, VR}.

Figure 7. Static display performance.

In Fig. 7, the data compares design systems in 2D, 3D, and VR static display contexts. PFDN-VGDM surpasses all systems in 2D with 88% performance, whereas MVDA struggles the most. PFDN-VGDM leads in 3D with 90%, somewhat better than in 2D.
PFDN-VGDM performs best in VR at 92%, with VRVC also improving. MVDA [18] improves the most from 2D to VR but performs worst across all categories. VRVC improves VR static displays, while DTVD performs consistently across all screens. PFDN-VGDM gives designers a versatile and effective option for working with static visual elements across platforms.

5. Conclusion

The probabilistic fuzzy dynamic network-based virtual graphic design map greatly improves VR graphics. VR visual communication issues are addressed by intelligent feature extraction, fuzzy cognitive mapping, and dynamic picture processing. Enhanced visual quality, feature analysis, design efficiency, and VR adaptation are the key benefits. Designers working with complex visual data benefit from PFDN-VGDM's contextually responsive decision-making. Progress has been made, although PFDN-VGDM has limitations. The computational load precludes low-processing devices from using it. Complex visuals test the system's style transfer. Its limited customisation may deter designers of different tastes. Testing on unstandardised or real-world data may be problematic due to dataset volatility. VR interaction is difficult owing to software and hardware differences. Comparing controlled performance metrics to real-world ones can be challenging, especially across different settings. These limits must be overcome for optimal PFDN-VGDM system performance. Future research should leverage hardware acceleration or lightweight approaches to boost computational efficiency and device accessibility. For artistic styles, the style transfer technique must capture subtleties and sophisticated design elements. User control over system parameters improves adaptability and pleases designers. In future iterations, adaptive learning may increase robustness to different datasets and design trends. Real-time collaboration and VR development platform compatibility ease design.
Cross-platform interoperability and customisable interface design boost virtual reality use. Once these difficulties are fixed, PFDN-VGDM will become a new VR graphic design tool that improves visual communication and system efficacy.

Funding

This study was supported by the 2022 Anhui Provincial Quality Engineering Project, Transformation and Upgrading of Traditional Specialties (No. 2022zygzts091), and the 2020 Anhui Quality Engineering Project, Research on Characteristic Teaching Reform of the Visual Communication Design Major in Local Universities under the R+CDIO Mode (No. 2020jyxm1953).

References

[1] Y. Gu, Q. Wang, and W. Gu, The innovative application of visual communication design in modern art design, Electronics, 12(5), 2023, 1150.
[2] R. Mykhailova, O. Abramova, N. Kravchenko, I. Petrova, I. Nebesnyk, and M. Sofilkanych, Modern web design and blog design: Virtual reality and augmented reality, BRAIN. Broad Research in Artificial Intelligence and Neuroscience, 14(3), 2023, 394–407.
[3] E.H. Korkut and E. Surer, Visualization in virtual reality: A systematic review, Virtual Reality, 27(2), 2023, 1447–1480.
[4] D. Paes, J. Irizarry, M. Billinghurst, and D. Pujoni, Investigating the relationship between three-dimensional perception and presence in virtual reality-reconstructed architecture, Applied Ergonomics, 109, 2023, 103953.
[5] P.M. Shakeel and S. Baskar, Automatic human emotion classification in web document using fuzzy inference system (FIS), International Journal of Technology and Human Interaction, 16(1), 2020, 94–104.
[6] S. Pastel, J. Marlok, N. Bandow, and K. Witte, Application of eye-tracking systems integrated into immersive virtual reality and possible transfer to the sports sector - A systematic review, Multimedia Tools and Applications, 82(3), 2023, 4181–4208.
[7] J. Radianti, T.A. Majchrzak, J. Fromm, and I. Wohlgenannt, A systematic review of immersive virtual reality applications for higher education: Design elements, lessons learned, and research agenda, Computers & Education, 147, 2020, 103778.
[8] E. Pietroni and D. Ferdani, Virtual restoration and virtual reconstruction in cultural heritage: Terminology, methodologies, visual representation techniques and cognitive models, Information, 12(4), 2021, 167.
[9] B. He, Application of VR simulation and image optical processing in image visual communication design, Optical and Quantum Electronics, 56(2), 2024, 212.
[10] S. Weber, L. Rudolph, S. Liedtke, C. Eichhorn, D. Dyrda, D.A. Plecher, and G. Klinker, Frameworks enabling ubiquitous mixed reality applications across dynamically adaptable device configurations, Frontiers in Virtual Reality, 3, 2022.
[11] H. Asadi, T. Bellmann, S. Mohamed, C.P. Lim, A. Khosravi, and S. Nahavandi, Adaptive motion cueing algorithm using optimized fuzzy control system for motion simulators, IEEE Transactions on Intelligent Vehicles, 8(1), 2023, 390–403.
[12] Y. Wang, M. Sheng, and D.A. Ghani, Virtual reality and augmented reality-based digital pattern design in the context of the blockchain technology framework, Journal of Autonomous Intelligence, 7(5), 2024.
[13] F. Li, Y. Gao, A. Candeias, and Y. Wu, Virtual restoration system for 3D digital cultural relics based on a fuzzy logic algorithm, Systems, 11(7), 2023, 374.
[14] A. Maden and G.N. Yücenur, Evaluation of sustainable metaverse characteristics using scenario-based fuzzy cognitive map, Computers in Human Behavior, 152, 2024, 108090.
[15] S.S. Kumaran, S.J.S. Chelladurai, K.B.B. Narayanan, and T.A. Selvan, Prediction of received signal strength using the fuzzy logic controller for localisation of sensors in mobile robots, International Journal of Robotics and Automation, 39(4), 2024, 302–311.
[16] Z. Long, Y. Wang, and Z. Luo, Fuzzy control robot energy saving method based on particle swarm optimisation algorithm, International Journal of Robotics and Automation, 39(6), 2024, 482–489.
Noise reduction         Bilateral filtering, non-local means       Remove artifacts and noise              [1], [6], [18]
Resolution adjustment   Bicubic interpolation, super-resolution    Optimise image quality for VR display   [1], [6], [18]

Figure 1. System architecture for PFDN-VGDM.

in a higher-quality VR experience with more complex and dynamic visual aspects.

3.1.1 Dynamic Segmentation

The study applies K-means clustering to divide the image into separate segments. Its goal is to minimise the total squared deviation between each pixel and the centre of its respective cluster:

\min_{\{S_i\}_{i=1}^{k}} \sum_{i=1}^{k} \sum_{x \in S_i} \lVert x - \mu_i \rVert^2    (1)

Equation (1) states the K-means objective: minimise the sum of squared distances between each point x and its corresponding centroid \mu_i. In this context, S_i represents the i-th segment and \mu_i the centroid of segment S_i; the goal is to identify the segmentation that minimises this total. By contrast, the Watershed algorithm treats the grayscale image as a topographic surface in which bright pixels signify high elevations, which makes overlapping objects easier to separate. Segmentation in PFDN-VGDM is best handled by K-means because of its strength in clustering image pixels by feature similarity, which is essential for dividing VR images into meaningful segments.
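As a minimal sketch of the objective in (1), a plain NumPy implementation of Lloyd's algorithm over pixel feature vectors might look as follows; the function name and the flattened `(n_pixels, n_features)` input layout are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def kmeans_segment(pixels, k, iters=20, seed=0):
    """Lloyd's algorithm: alternately assign pixels to the nearest centroid
    and recompute centroids, shrinking the objective of equation (1)."""
    pixels = np.asarray(pixels, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)].copy()
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Squared distance of every pixel to every centroid.
        d2 = ((pixels[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for i in range(k):
            members = pixels[labels == i]
            if len(members):                 # leave empty clusters in place
                centroids[i] = members.mean(axis=0)
    return labels, centroids
```

For an RGB image, `pixels` would be `img.reshape(-1, 3)`, and the returned labels reshape back onto the image grid to give the segments S_i.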
K-means works well in real-time VR settings because of its simplicity, scalability, and ability to minimise the sum of squared distances between cluster centroids and pixels. The study's objective of improving visual communication in VR through accurate and adaptive segmentation aligns with K-means' computational efficiency and adaptability to changing virtual settings.

3.1.2 Real-Time Noise Reduction

By averaging pixel values with Gaussian weights that depend on both spatial and intensity differences, the bilateral filter smooths images while maintaining the integrity of their edges:

BF[I]_p = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}(\lVert p - q \rVert)\, G_{\sigma_r}(|I_p - I_q|)\, I_q    (2)

Bilateral filtering is represented by (2). At pixel p, the filtered image value BF[I]_p is a weighted sum of the values of surrounding pixels I_q. The weights are computed from the spatial Gaussian function G_{\sigma_s} and the range Gaussian function G_{\sigma_r}, which reduce noise while retaining edges; W_p is the normalising factor.

Table 3
Feature Extraction Methods

Feature   Method                                   Output                                       References
Shape     Contour analysis, Hough transform        Geometric primitives, object boundaries      [1], [2]
Colour    Histogram analysis, K-means clustering   Colour palette, dominant colours             [3], [4]
Style     Convolutional neural networks            Artistic style classification                [5], [6]
Spatial   Graph-based analysis                     Relative positions, hierarchical structure   [7], [8]

3.1.3 Dynamic Resolution Adjustment

Bicubic interpolation determines the value of a new pixel from the 16 pixels nearest to it, generating smoother and more precise images than simpler interpolation techniques. Super-resolution enhances picture resolution with deep learning models, generating high-quality images from lower-resolution inputs. Dynamic image processing allows real-time analysis, adaptation, and improvement of visual content, optimising visual communication in VR.
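The edge-preserving smoothing of equation (2) can be sketched directly; this brute-force version (illustrative, not the paper's optimised implementation) assumes a single-channel image with intensities in [0, 1]:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Brute-force bilateral filter per equation (2): each output pixel is a
    sum of neighbours I_q weighted by a spatial Gaussian G_sigma_s and a
    range (intensity) Gaussian G_sigma_r, normalised by W_p."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            g_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            g_r = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
            weight = g_s * g_r
            out[y, x] = (weight * patch).sum() / weight.sum()  # 1 / W_p
    return out
```

Pixels on the far side of a strong edge receive a near-zero range weight, which is why edges survive the smoothing while noise within flat regions is averaged away.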
Dynamic image processing can detect changes, react to human input, and adapt to changing environments in real time, monitoring motion, extracting features, and segmenting images. Advanced algorithms, such as optical flow for motion analysis, convolutional neural networks (CNNs) for feature identification, and probabilistic models for adaptive filtering, ensure consistent and high-quality visuals regardless of angle, illumination, or input. This constant optimisation makes VR experiences more compelling and user-friendly, clarifies visual signals, and allows for design modifications.

3.2 Enhanced Feature Extraction

Table 3 shows how graphic designers extract features from VR images. Contour analysis and the Hough transform identify geometric primitives and object boundaries. Histogram analysis and K-means clustering determine colour palettes and identify dominant hues. CNNs classify creative styles, while graph-based analysis captures spatial links and hierarchies.

3.3 PFDN Creation

PFDNs are advanced fuzzy cognitive maps (FCMs) featuring probabilistic modelling, fuzzy logic, and dynamic network behaviour.

Table 4
Fuzzy Logic Operations

Operation   Formula                     Description
AND         min(\mu_A(x), \mu_B(x))    Intersection of fuzzy sets
OR          max(\mu_A(x), \mu_B(x))    Union of fuzzy sets
NOT         1 - \mu_A(x)               Complement of a fuzzy set

Fuzzy control logic represents uncertain relationships between variables or concepts in a dynamic system, while probabilistic techniques capture stochasticity. Activating nodes and updating edge weights propagate influence along the weighted edges between variables or concepts, allowing complex, uncertain, and dynamic PFDN interactions to be modelled and analysed. Reducing uncertainty improves forecasting, scenario analysis, and prediction. In PFDN creation, fuzzy logic produces the relational model from the extracted features: it designs the nodes, determines node interactions, and weights the edges.
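The three operations of Table 4 are one-liners; a minimal sketch:

```python
def fuzzy_and(mu_a, mu_b):
    """Intersection of fuzzy sets: min(mu_A(x), mu_B(x))."""
    return min(mu_a, mu_b)

def fuzzy_or(mu_a, mu_b):
    """Union of fuzzy sets: max(mu_A(x), mu_B(x))."""
    return max(mu_a, mu_b)

def fuzzy_not(mu_a):
    """Complement of a fuzzy set: 1 - mu_A(x)."""
    return 1.0 - mu_a
```

With partial memberships, `fuzzy_and(0.7, 0.4)` yields 0.4 and `fuzzy_or(0.7, 0.4)` yields 0.7, rather than the all-or-nothing 0/1 of binary logic.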
VR design decisions that address ambiguity and complexity provide a flexible, context-aware system that optimises visual communication. Table 4 shows how the PFDN represents complex and subjective idea connections using fuzzy logic operations such as AND, OR, and NOT. These operations describe interactions in detail, making VR judgments dynamic and context-aware. Fuzzy logic offers partial membership instead of binary logic, making concept linkages more adaptable; this is needed in complex systems like virtual reality, where relationships are intricate:

R = \{\, ((x, y), \mu_R(x, y)) \mid (x, y) \in X \times Y \,\}    (3)

The fuzzy relation between x and y is described by (3), in which \mu_R(x, y) measures the strength of their association. It provides a formal framework for representing and quantifying the type and intensity of connections between FCM components. The PFDN for the virtual graphic design map (PFDN-VGDM) system integrates a fuzzy controller by adding a fuzzy logic component for VR design decisions. User perspective (UP), VR environment parameters (VRE), style recognition score (SR), colour analysis findings (CA), and shape detection accuracy (SD) feed this module's controller, which sits between the design element and visual quality nodes; fuzzy rules generate the design element adjustments. The input and output fuzzy sets are, in that order, Low, Medium, High and Minor, Moderate, Major. To maximise VR visual quality, the fuzzy controller fuzzifies the inputs, applies the rules, aggregates the outputs, and defuzzifies using triangular or trapezoidal membership functions to generate a crisp design element adjustment (DEA) value:

A_i(k+1) = f\Big( A_i(k) + \sum_{j=1,\, j \neq i}^{N} A_j(k)\, w_{ji} + FC(UP, VRE, SR, CA, SD) \Big)    (4)

Figure 2. PFDN for PFDN-VGDM.

Equation (4) gives the state update for a node i in the PFDN. At time step k + 1, node i's state is determined by its current state and the weighted influence of the other nodes. The update thus captures the system's dynamic behaviour, revealing how each concept's state changes over time in response to inputs from the other concepts.
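The fuzzify, apply-rules, aggregate, defuzzify pipeline can be sketched with triangular memberships. The rule base below is a hypothetical stand-in (the paper does not list its rules), and all five inputs are assumed normalised to [0, 1]:

```python
def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b (shoulders allowed)."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Input sets Low/Medium/High and output centres Minor/Moderate/Major on [0, 1].
IN_SETS = {"low": (0.0, 0.0, 0.5), "medium": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.0)}
OUT_CENTRES = {"minor": 0.2, "moderate": 0.5, "major": 0.8}

def fuzzy_dea(up, vre, sr, ca, sd):
    """Crisp DEA from the five controller inputs. Hypothetical rules: any
    weak input fires 'major', uniformly strong inputs fire 'minor', middling
    inputs fire 'moderate'; defuzzify by a weighted centroid of centres."""
    inputs = (up, vre, sr, ca, sd)
    fire = {
        "major": max(tri(v, *IN_SETS["low"]) for v in inputs),
        "minor": min(tri(v, *IN_SETS["high"]) for v in inputs),
        "moderate": max(tri(v, *IN_SETS["medium"]) for v in inputs),
    }
    total = sum(fire.values())
    return sum(s * OUT_CENTRES[r] for r, s in fire.items()) / total if total else 0.5
```

A crisp value near 0.2 suggests a minor touch-up of the design element, while a value near 0.8 calls for a major adjustment.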
FC denotes the fuzzy controller function that produces the DEA value. The VR design process may make more sophisticated and context-aware decisions thanks to the fuzzy controller's integration with the PFDN: because it manages the inherent uncertainties in user perception and environmental conditions, the VR graphic design system can achieve more responsive and adaptive design optimisations.

The PFDN structure is defined as follows. The nodes N = {n_1, n_2, ..., n_k} represent design aspects and environmental factors. The edges E = {e_ij} encode probabilistic fuzzy relationships between nodes: each edge e_ij connects node i to node j and carries a weight w_ij together with an associated probability distribution p_ij. Through these edge weights and probability distributions, the PFDN depicts fuzzy links between design elements and environmental parameters along with their uncertainty. In VR design, nodes represent concepts such as colour palettes, area layouts, and user interactions; edge weights show influence strength, and the probability distributions capture variability or uncertainty in how these factors interact.

Figure 2 shows the PFDN of the VR graphic design system. A start input node controls dynamic segmentation, noise reduction, and resolution adjustment. These steps feed spatial relationship analysis, style recognition, colour analysis, and shape detection, and their assessments flow into the design element node, which the user perspective modifies. Design element-VR environment interactions affect the VR output, and the VR setup and visual quality affect user experience.

Figure 3. Advanced VR Optimisation.

Table 5
Optimisation Criteria

Aspect        Criteria                           Measurement
Layout        Balance, symmetry, golden ratio    Spatial distribution score
Colour        Harmony, contrast, accessibility   Colour harmony index
Interaction   Ergonomics, intuitiveness          User effort estimation

User experience data from performance optimisation can feed back to the design element node and the VRE. This iterative approach improves VR by incorporating user feedback and system performance in real time. Multiple inputs are integrated via '+' nodes. The map shows the relevance of user feedback and of steady advances in developing and optimising the VR graphic design system.

3.4 Advanced VR Optimisation

Figure 3 illustrates the simplicity of PFDN's VR design optimisation. The workflow begins with the fuzzy cognitive map, a cluster of interconnected nodes that drives decision-making. A design interface icon represents layout optimisation using this input. The palette and brush tool optimise colour for aesthetics. Next, several screens optimise interaction to improve the user experience. Finally, these refined elements form a globe with gears and a speedometer, symbolising a refined, efficient, and globally relevant VR experience. This systematic technique has greatly improved VR design.

In Table 5, the layout criteria used in VR design optimisation include symmetry, balance, and the golden ratio, quantified by spatial distribution scores. Colour standards include accessibility, harmony, and contrast, measured using a colour harmony index. The user effort estimation assesses the ergonomics and intuitiveness of the interaction criteria:

B = 1 - \frac{1}{2} \left( \frac{\left| \sum_i w_i x_i \right|}{\sum_i w_i} + \frac{\left| \sum_i w_i y_i \right|}{\sum_i w_i} \right)    (5)

Table 6
Compression Techniques

Data Type      Technique                            Compression Ratio
Geometry       Mesh simplification, quantisation    10:1 - 20:1
Textures       ASTC, ETC2, BC7                      4:1 - 8:1
Interactions   Keyframe reduction, Bezier curves    5:1 - 10:1

In (5), visual equilibrium is captured by the layout balance B, computed from the weighted placements of the elements.
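A minimal sketch of the layout balance of equation (5) and the colour harmony score of equation (6), assuming element coordinates (x_i, y_i) are measured from the canvas centre and `delta_e` holds CIELAB colour differences; the function names are illustrative:

```python
def layout_balance(weights, xs, ys):
    """Equation (5): one minus the average absolute weighted offset of the
    elements along x and y, so a centred layout scores close to 1."""
    total = sum(weights)
    bx = abs(sum(w * x for w, x in zip(weights, xs))) / total
    by = abs(sum(w * y for w, y in zip(weights, ys))) / total
    return 1.0 - (bx + by) / 2.0

def colour_harmony(delta_e):
    """Equation (6): one minus the summed CIELAB differences normalised by
    n times the maximum difference."""
    diffs = [abs(d) for d in delta_e]
    return 1.0 - sum(diffs) / (len(diffs) * max(diffs))
```

Two equal-weight elements mirrored about the centre give a balance score of exactly 1, the ideal case the text describes.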
For a well-balanced design, (5) yields a spatial distribution score close to 1: the weighted sums of the element placements (x_i, y_i) along each axis are normalised by the total visual weight, and the average of their absolute offsets is subtracted from one. Colour harmony is scored analogously:

H = 1 - \frac{\sum \left| \Delta E^*_{ab} \right|}{n \cdot \max\left( \Delta E^*_{ab} \right)}    (6)

In (6), colour harmony H is calculated by evaluating the colour differences (\Delta E^*_{ab}) in CIELAB space. To ensure the colour scheme is visually pleasing, the formula measures discordance by summing the absolute colour differences and normalising them against the maximum difference and the number of samples n.

3.5 Scalable Compression System

The scalable compression system optimises VR design by reducing data size without compromising quality, compressing geometry, textures, and interaction data. Table 6 shows that geometry mesh simplification and quantisation, texture formats including ASTC, ETC2, and BC7, and interaction keyframe reduction with Bezier curves produce compression ratios between 4:1 and 20:1. The achievable ratio depends on data redundancy, image quality, texture complexity, and encoding: high-resolution images and detailed textures yield low compression ratios and larger data volumes, since they cannot be compressed further without quality loss, whereas effective encoding and redundancy in the visual content boost compression ratios, lowering data size without affecting quality. Higher compression ratios reduce storage and bandwidth needs, improving VR data streaming and real-time rendering.

3.6 Environment Simulation Engine

The environment simulation engine maximises performance for visual fidelity and smooth operation, creating a realistic virtual world. It calculates complex lighting, shadows, and object interactions at high frame rates to reduce user disorientation.
With this intricate simulation, virtual realities generate living, breathing worlds.

Algorithm 1: Adaptive VR Design Optimisation
Input: VR_Image
Output: Optimised VR design
Step 1: Image preprocessing
    processed_image = preprocess_image(VR_Image)
Step 2: Feature extraction
    features = extract_features(processed_image)
Step 3: Initialise PFDN
    FCM = initialise_PFDN()
Step 4: Main optimisation loop
    for iteration = 1 to max_iterations do
        UP = get_user_perspective()
        VRE = analyse_vr_environment()
        SR, CA, SD = features[2], features[1], features[0]
        DEA = fuzzy_controller(UP, VRE, SR, CA, SD)
        for i = 1 to num_nodes do
            weighted_sum = 0
            for j = 1 to num_nodes do
                weighted_sum += weights[j][i] * FCM[j]
            end for
            FCM[i] = fuzzy_inference(FCM[i] + weighted_sum + DEA[i])
        end for
        if convergence_reached(FCM) then
            break
        end if
    end for
Step 5: Generate optimised design
    Optimised_VR_Design = generate_design(FCM)
Step 6: Return Optimised_VR_Design

The PFDN-VGDM procedure in Algorithm 1 optimises VR designs by processing and refining the input visuals. Image preprocessing (resolution adjustment, noise elimination, and segmentation) comes first. Feature extraction then identifies shape, colour, style, and spatial structure. Algorithm 1's convergence condition is satisfied when the iterative increase in design efficiency or accuracy falls below a predetermined threshold, meaning that additional modifications no longer yield significant benefits; the optimisation ends as soon as an optimal or nearly optimal solution is found. An optimised design is indicated by improved efficiency metrics, such as reduced computational load, faster rendering times, and enhanced user satisfaction within VR environments, along with high accuracy in shape detection, colour analysis, style classification, and interactive performance. The fuzzy controller then applies fuzzy logic principles to these features, considering the user's perspective and the VR environment settings, to calculate a design element adjustment (DEA) value.
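The main loop of Algorithm 1 reduces to a few lines; a sketch assuming a logistic squashing function for the unspecified fuzzy inference step f, with the DEA supplied per node:

```python
import math

def squash(x):
    """Logistic transfer function, a common stand-in for the FCM's f."""
    return 1.0 / (1.0 + math.exp(-x))

def optimise_fcm(fcm, weights, dea, max_iterations=100, tol=1e-4):
    """Apply the state update of equation (4) until activations settle.
    fcm: list of node activations; weights[j][i]: influence of node j on
    node i; dea[i]: design element adjustment fed to node i."""
    for _ in range(max_iterations):
        new = [squash(fcm[i]
                      + sum(weights[j][i] * fcm[j]
                            for j in range(len(fcm)) if j != i)
                      + dea[i])
               for i in range(len(fcm))]
        if max(abs(a - b) for a, b in zip(new, fcm)) < tol:  # converged
            return new
        fcm = new
    return fcm
```

The convergence test mirrors Algorithm 1's stopping condition: iteration halts once no activation changes by more than the threshold `tol`.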
These features, together with the DEA value, update and refine the fuzzy cognitive map (FCM) through its nodes. User feedback and environmental analysis update the FCM in the main PFDN-VGDM algorithm until convergence, after which the optimised VR design is ready for execution. This flexible and adaptive method enhances the visuals of VR applications.

4. Simulation of PFDN-VGDM

Dataset Study

The study utilised the open photographs dataset V7 [24], which comprises a large number of annotated photographs. This dataset consists of numerous objects with rich annotations, making it particularly useful for visual understanding, object detection, and segmentation; the collection labels item bounding boxes, segmentation masks, and object relationships. The paper analyses how the fuzzy control-based virtual graphic design map (PFDN-VGDM) technology improves VR visual communication. The experiment used Oculus Quest 2 and HTC Vive Pro VR headsets, high-performance PCs with Intel i9 CPUs and NVIDIA RTX 3080 GPUs, and Unity 3D (2021.2) for the virtual worlds. MATLAB R2021b was used for fuzzy logic techniques and image processing, while Python 3.9 with TensorFlow and PyTorch was used for style recognition and super-resolution. The PFDN-VGDM system was tested on 1000 designs spanning logos, brand identities, product packaging, posters and commercials, user interfaces, and web designs.

Comparison Study

The model is compared with others to demonstrate its value. This comparative study uses shape detection accuracy, colour analysis accuracy, style classification accuracy, spatial relationship accuracy, average quality improvement, average user satisfaction, virtual environment performance, and static display performance to evaluate visual communication design. The compared algorithms are VRVC [20], DTVD [19], and MVDA [18].

4.1 Average Quality Improvement

Average quality improvement is the overall quality improvement over time. It evaluates average improvement in performance or effectiveness across parameters, quantified by accuracy, dependability, usability, and efficiency. Tracking these gains over time enables firms to evaluate their processes, products, and services and make informed decisions to enhance quality and customer satisfaction. The individual quality parameters included in the quality metrics are efficiency (E), accuracy (A), dependability (D), and usability (U):

Average Quality Improvement = \frac{1}{n} \sum_{i=1}^{n} \frac{Q_{i,\mathrm{new}} - Q_{i,\mathrm{base}}}{Q_{i,\mathrm{base}}}    (7)

In (7), the relative gains of the n quality metrics over the evaluation periods are summed and divided by the total number of metrics to obtain the average quality improvement. The method supports quality and user-satisfaction strategy decisions by evaluating overall effectiveness enhancements across metrics.

Figure 4. Average quality improvement.

In Figure 4, the average quality improvement for the PFDN-VGDM framework measures accuracy, reliability, usability, and efficiency gains over the evaluation periods. In the first period, MVDA shows the smallest improvement at 14, with DTVD at 10, VRVC at 13, and PFDN-VGDM at 20. In the second period, MVDA reaches 18, DTVD 12, VRVC 18, and PFDN-VGDM 25. At the highest level, MVDA, DTVD, VRVC, and PFDN-VGDM reach 20, 19, 21, and 35 respectively. These measurements of framework efficacy inform strategic decisions to improve quality and user satisfaction.

4.2 Design Efficiency Improvement

Design efficiency improvement involves improving design processes, workflows, and techniques to boost productivity, quality, and effectiveness. Optimising design resources, integrating modern technologies such as automation and AI, refining workflows, and continually improving design practices are key components. The primary objectives are to reduce time-to-market, mistakes, and resource utilisation while improving design performance, in order to meet project goals and satisfy stakeholder expectations.
Efficiency metrics: these include machine vision-based design analysis (MVDA), digital technology in visual communication design (DTVD), VR for visual communication (VRVC), and the fuzzy control logic-based virtual graphic design map (PFDN-VGDM):

Efficiency improvement = \frac{\sum_{i=1}^{n} \omega_i \rho_i}{\sum_{i=1}^{n} \omega_i}    (8)

where \omega_i is the weight of the i-th optimisation criterion and \rho_i is the performance improvement from the i-th criterion. To compute (8), the efficiency metrics (MVDA, DTVD, VRVC, PFDN-VGDM) are calculated daily, weekly, and monthly, summarised for each period, and the weighted sum is divided by the total weight to obtain the average efficiency improvement.

In Fig. 5, the performance enhancements in the design were examined using the MVDA, DTVD, VRVC, and PFDN-VGDM algorithms. Daily metrics are reasonable, with PFDN-VGDM scoring best at 8. Weekly results improve slightly, with PFDN-VGDM reaching 9. At a monthly score of 9, PFDN-VGDM is still leading the pack. These numbers show that PFDN-VGDM is the most effective at increasing design efficiency, which bodes well for the future of design processes.

Figure 5. Design efficiency improvement.

4.3 Interactive Environment Performance

Interactive environment performance measures indicate how interactive systems or environments perform in real time. They evaluate the responsiveness, user experience, reliability, and adaptability of interactive elements in digital or physical environments. The assessment helps determine how well interactive systems match user expectations, optimise usability, and ensure seamless user engagement:

Interactive Environment Performance (IEP) = \frac{\sum_i w_i \cdot P_i}{\sum_i w_i} \times 100\%    (9)

In (9), w_i is the weight of the i-th performance metric and P_i is its score, with i ranging over {adaptation, user experience, reliability, responsiveness}.
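Equations (7) through (9) are simple aggregations; a minimal sketch with illustrative inputs:

```python
def average_quality_improvement(baseline, new):
    """Equation (7): mean relative gain over the n quality metrics
    (efficiency, accuracy, dependability, usability)."""
    gains = [(q_new - q_base) / q_base for q_base, q_new in zip(baseline, new)]
    return sum(gains) / len(gains)

def weighted_improvement(weights, scores):
    """Equation (8): per-criterion scores averaged by importance weights."""
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

def iep(weights, scores):
    """Equation (9): interactive environment performance as a percentage,
    over {responsiveness, reliability, user experience, adaptation}."""
    return weighted_improvement(weights, scores) * 100.0
```

For example, baselines of 10 and 20 improving to 12 and 25 give an average quality improvement of 0.225, i.e., a 22.5% mean gain.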
When applying (9), the weights w_i for interactive environment performance (IEP) are set according to the significance of each performance parameter in the particular interactive VR setting. Responsiveness, reliability, user experience, and adaptation are common measures in this category. Expert opinion, empirical research, or user input determines the weights, which represent the relative importance of each parameter for a smooth interactive experience. The weights can change with context, since different virtual reality uses (such as training simulations versus entertainment) may place different value on qualities like responsiveness and user experience. Adjusting these weights ensures the IEP computation reflects performance across a variety of interactive contexts.

In Fig. 6, the design system's performance in interactive VR settings is measured by interactive environment performance. With 90% performance, PFDN-VGDM beats all other systems in basic scenarios. It also leads in moderately complicated scenarios at 85%, and improves to 87% in difficult ones. These data show that PFDN-VGDM is the strongest option for interactive VR designers, since it can manage projects of all levels of complexity while maintaining high and consistent performance.

Figure 6. Interactive environment performance.

Figure 7. Static display performance.

4.4 Static Display Performance

SDP_{context} = \frac{SP_{context}}{MP_{context}} \times 100    (10)

In (10), SDP_{context} is the static display performance for a specific context (2D, 3D, or VR), SP_{context} is the system performance in that context, MP_{context} is the maximum possible performance in that context, and context ∈ {2D, 3D, VR}.

In Fig. 7, the data compare design systems in 2D, 3D, and VR static display contexts. PFDN-VGDM surpasses all 2D systems with 88% performance, whereas MVDA struggles the most. PFDN-VGDM leads in 3D with 90%, somewhat better than in 2D.
PFDN-VGDM performs best in VR at 92%, with VRVC also improving. MVDA [18] improves the most from 2D to VR but performs worst across all categories. VRVC improves VR static displays, while DTVD performs consistently across all screens. PFDN-VGDM gives designers a versatile and effective option for working with static visual elements across platforms.

5. Conclusion

The probabilistic fuzzy dynamic network-based virtual graphic design map greatly improves VR graphics. VR visual communication issues are addressed by intelligent feature extraction, fuzzy cognitive mapping, and dynamic picture processing. Enhanced visual quality, feature analysis, design efficiency, and VR adaptation are the key benefits. Designers working with complex visual data benefit from PFDN-VGDM's contextually responsive decision-making.

Progress has been made, although PFDN-VGDM has limitations. The computational load precludes low-processing devices from using it. Complex visuals strain the system's style transfer. Its limited customisation may deter designers with different tastes. Testing on unstandardised or real-world data may be problematic due to dataset volatility. VR interaction is difficult owing to software and hardware differences. Comparing controlled performance metrics to real-world ones can be challenging, especially in different settings. These limits must be overcome for optimal PFDN-VGDM system performance.

Future research should leverage hardware acceleration or lightweight approaches to boost computational efficiency and device accessibility. For artistic styles, the style transfer technique must capture subtleties and sophisticated design elements. User control over system parameters improves adaptability and pleases designers. In future iterations, adaptive learning may increase robustness for different datasets and design trends. Real-time collaboration and VR development platform compatibility ease design.
Cross-platform interoperability and customisable interface design boost virtual reality use. Once these difficulties are fixed, PFDN-VGDM will become a new VR graphic design tool that improves visual communication and system efficacy.

Funding

This study was supported by the 2022 Anhui Provincial Quality Engineering Project, Transformation and Upgrading of Traditional Specialties (No. 2022zygzts091), and the 2020 Anhui Quality Engineering Project: Research on Characteristic Teaching Reform of the Visual Communication Design Major in Local Universities under the R+CDIO Mode (No. 2020jyxm1953).

References

[1] Y. Gu, Q. Wang, and W. Gu, The innovative application of visual communication design in modern art design, Electronics, 12(5), 2023, 1150.
[2] R. Mykhailova, O. Abramova, N. Kravchenko, I. Petrova, I. Nebesnyk, and M. Sofilkanych, Modern web design and blog design: Virtual reality and augmented reality, BRAIN. Broad Research in Artificial Intelligence and Neuroscience, 14(3), 2023, 394–407.
[3] E.H. Korkut, and E. Surer, Visualization in virtual reality: A systematic review, Virtual Reality, 27(2), 2023, 1447–1480.
[4] D. Paes, J. Irizarry, M. Billinghurst, and D. Pujoni, Investigating the relationship between three-dimensional perception and presence in virtual reality-reconstructed architecture, Applied Ergonomics, 109, 2023, 103953.
[5] P.M. Shakeel, and S. Baskar, Automatic human emotion classification in web document using fuzzy inference system (FIS), International Journal of Technology and Human Interaction, 16(1), 2020, 94–104.
[6] S. Pastel, J. Marlok, N. Bandow, and K. Witte, Application of eye-tracking systems integrated into immersive virtual reality and possible transfer to the sports sector - A systematic review, Multimedia Tools and Applications, 82(3), 2023, 4181–4208.
[7] J. Radianti, T.A. Majchrzak, J. Fromm, and I. Wohlgenannt, A systematic review of immersive virtual reality applications for higher education: Design elements, lessons learned, and research agenda, Computers & Education, 147, 2020, 103778.
[8] E. Pietroni, and D. Ferdani, Virtual restoration and virtual reconstruction in cultural heritage: Terminology, methodologies, visual representation techniques and cognitive models, Information, 12(4), 2021, 167.
[9] B. He, Application of VR simulation and image optical processing in image visual communication design, Optical and Quantum Electronics, 56(2), 2024, 212.
[10] S. Weber, L. Rudolph, S. Liedtke, C. Eichhorn, D. Dyrda, D.A. Plecher, and G. Klinker, Frameworks enabling ubiquitous mixed reality applications across dynamically adaptable device configurations, Frontiers in Virtual Reality, 3, 2022.
[11] H. Asadi, T. Bellmann, S. Mohamed, C.P. Lim, A. Khosravi, and S. Nahavandi, Adaptive motion cueing algorithm using optimized fuzzy control system for motion simulators, IEEE Transactions on Intelligent Vehicles, 8(1), 2023, 390–403.
[12] Y. Wang, M. Sheng, and D.A. Ghani, Virtual reality and augmented reality-based digital pattern design in the context of the blockchain technology framework, Journal of Autonomous Intelligence, 7(5), 2024.
[13] F. Li, Y. Gao, A. Candeias, and Y. Wu, Virtual restoration system for 3D digital cultural relics based on a fuzzy logic algorithm, Systems, 11(7), 2023, 374.
[14] A. Maden, and G.N. Yücenur, Evaluation of sustainable metaverse characteristics using scenario-based fuzzy cognitive map, Computers in Human Behavior, 152, 2024, 108090.
[15] S.S. Kumaran, S.J.S. Chelladurai, K.B.B. Narayanan, and T.A. Selvan, Prediction of received signal strength using the fuzzy logic controller for localisation of sensors in mobile robots, International Journal of Robotics and Automation, 39(4), 2024, 302–311.
[16] Z. Long, Y. Wang, and Z. Luo, Fuzzy control robot energy saving method based on particle swarm optimisation algorithm, International Journal of Robotics and Automation, 39(6), 2024, 482–489.
[17] Y. Wang, J.R. Chardonnet, F. Merienne, and J. Ovtcharova, Using fuzzy logic to involve individual differences for predicting cybersickness during VR navigation, in Proc. 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Lisboa, 2021, 373–381.
[18] X. Guan, and K. Wang, Visual communication design using machine vision and digital media communication technology, Wireless Communications and Mobile Computing, 2022, 2022, 1–11.
[19] Q. Sun, and Y. Zhu, Teaching analysis for visual communication design with the perspective of digital technology, Computational and Mathematical Methods in Medicine, 2022, 2022, 2411811.
[20] N.M. Sooter, and G. Ugazio, Virtual reality for philanthropy: A promising tool to innovate fundraising, Judgment and Decision Making, 18, 2023.
[21] B. Schone, J. Kisker, R.S. Sylvester, E.L. Radtke, and T. Gruber, Library for universal virtual reality experiments (luVRe): A standardized immersive 3D/360 picture and video database for VR-based research, Current Psychology, 42(7), 2023, 5366–5384.
[22] J.R.J. Neo, A.S. Won, and M.M.C. Shepley, Designing immersive virtual environments for human behavior research, Frontiers in Virtual Reality, 2, 2021.
  19. [24],which comprises a large number of annotated photographs.This dataset consists of numerous objects with richannotations, making it particularly useful for visualunderstanding, object detection, and segmentation. Thecollection labels item bounding boxes, segmentation masks,and object relationships. The paper analyses how thetechnology of fuzzy control-based virtual graphic designmap (PFDM-VGDM) improves VR visual communication.VR headsets Oculus Quest 2 and HTC Vive Pro, high-performance PCs with Intel i9 CPUs and NVIDIA RTX3080 GPUs, and Unity 3D (2021.2) for virtual worlds wereused in the experiment MATLAB R2021b was used forfuzzy logic techniques and image processing. In contrast,Python 3.9 with TensorFlow and PyTorch was used forstyle recognition and super-resolution. The PFDN-VGDMsystem was tested on 1000 logos, brand identities, productpackaging, poster and commercial designs, user interface,and web designs.Comparison StudyCompare the model to others to prove its value.This comparative study uses shape detection accuracy,colour analysis accuracy, style classification accuracy,spatial relationship accuracy, average quality improvement,average user satisfaction, virtual environment performance,and static display performance to evaluate visual com-munication design. These algorithms include VRVC [20],DTVD [19], and MVDA [18].4.1 Average Quality ImprovementAverage quality improvement is the overall qualityimprovement over time. It evaluates average performanceor effectiveness improvement across parameters. Thisenhancement can be quantified by accuracy, dependability,usability, and efficiency. Tracking these gains over timeenables firms to evaluate their processes, products, andservices, making informed decisions to enhance qualityand customer satisfaction. 
These are the individual qualityparameters that are included in the quality metrics:Efficiency (E), Accuracy (A) Dependability (D) andUsability (U)Average Quality Improvement =1nni=1Qi,new − Qi,baseQi,base(7)In (7), n is the sum of the quality metrics duringthe evaluation periods is divided by the total numberof metrics to get the average quality improvement. Themethod helps with quality and user satisfaction strategyselections by evaluating overall effectiveness enhancementsacross metrics.8Figure 4. Average quality Improvement.In Figure 4, the Average quality improvement for thePFDN-VGDM framework measures accuracy, reliability,usability, and efficiency gains over evaluation periods. At14, DTVD 10, VRVC 13, and PFDN-VGDM20, MVDAis poorly improved. MVDA is 18, DTVD 12, VRVC 18,and PFDN-VGDM25. High MVDA, DTVD, VRVC, andPFDN-VGDM levels are 20, 19, 21, and 35. Measurementsof framework efficacy inform strategic decisions to improvequality and user happiness.4.2 Design Efficiency ImprovementDesign efficiency improvement involves improving designprocesses, workflows, and techniques to boost productivity,quality, and effectiveness. Optimising design resources,integrating modern technologies such as automation andAI, refining workflows, and continually improving designpractices are key components. Reducing time-to-market,mistakes, resource utilisation, and design performance arethe primary objectives to meet project goals and satisfystakeholder expectations. Efficiency Metrics: This includesmetrics such as machine vision-based design analysis(MVDA), digital technology in visual communicationDesign (DTVD), VR for visual communication (VRVC)and fuzzy control logic-based virtual graphic design map(PFDM-VGDM)Efficiency improvement =ni=1 ωiρini=1 ωi(8)where ωi is the weight optimisation criterion, ρi is theperformance improvement from the ith criterion. 
In (8), the computation involves calculating the efficiency metrics for each method (MVDA, DTVD, VRVC, PFDN-VGDM) daily, weekly, and monthly, summing them for each period, and dividing the sum by the number of metrics to obtain the average efficiency improvement.

In Fig. 5, the design performance enhancements were examined using the MVDA, DTVD, VRVC, and PFDN-VGDM algorithms. Daily metrics are reasonable, with PFDN-VGDM posting the best score at 8. The weekly figures improve slightly, with PFDN-VGDM reaching 9. With a monthly performance of 9, PFDN-VGDM still leads the pack. These numbers show that PFDN-VGDM is the most effective at increasing design efficiency, which bodes well for future design processes.

Figure 5. Design efficiency improvement.

4.3 Interactive Environment Performance

Interactive environment performance measures indicate how well interactive systems or environments work in real time. The assessment evaluates the responsiveness, user experience, reliability, and adaptability of interactive elements in digital or physical environments, and helps determine how well interactive systems match user expectations, optimise usability, and ensure seamless user engagement.

\[
\text{IEP} = \frac{\sum_i w_i \cdot P_i}{\sum_i w_i} \times 100\% \tag{9}
\]

In (9), IEP is the interactive environment performance, \(w_i\) is the weight of the i-th performance metric, and \(P_i\) is the score of the i-th performance metric, with i ∈ {adaptation, user experience, reliability, responsiveness}. When evaluating (9), the weights \(w_i\) are set according to the significance of each performance parameter in the particular interactive VR setting. Responsiveness, reliability, user experience, and adaptation are the common measures in this category. Expert opinion, empirical research, or user input determines the weights, which represent the relative importance of each parameter for a smooth interactive experience.
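The weighted score in (9) can be sketched in a few lines of Python. The weights and metric scores below are made up for illustration; as noted above, in practice they would come from expert opinion, empirical research, or user input:

```python
def interactive_environment_performance(weights, scores):
    """Weighted-average performance as a percentage, as in (9).

    weights, scores: dicts keyed by metric name. Weights need not sum
    to 1, since the formula normalises by their total.
    """
    total_w = sum(weights.values())
    return sum(weights[m] * scores[m] for m in weights) / total_w * 100

# Illustrative weights and 0..1 scores for the four common metrics.
w = {"responsiveness": 0.4, "reliability": 0.3,
     "user_experience": 0.2, "adaptation": 0.1}
p = {"responsiveness": 0.90, "reliability": 0.85,
     "user_experience": 0.80, "adaptation": 0.70}
print(f"{interactive_environment_performance(w, p):.1f}%")  # prints 84.5%
```

Because the denominator normalises by the weight total, rescaling all weights by a constant leaves the IEP unchanged; only their relative importance matters.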
The weights may change with the situation, since different VR applications (such as training simulations versus entertainment) can place different value on qualities like responsiveness and user experience. Adjusting these weights ensures the IEP computation reflects performance across a variety of interactive contexts.

In Fig. 6, the design system's performance in interactive VR settings is measured by interactive environment performance. With 90% performance, PFDN-VGDM beats all other systems in basic situations. PFDN-VGDM also leads in moderately complicated situations with 85% performance, and improves to 87% in difficult situations. These data show that PFDN-VGDM is the best solution for interactive VR designers, since it manages projects of all levels of complexity while maintaining high and consistent system performance throughout.

Figure 6. Interactive environment performance.

4.4 Static Display Performance

\[
\text{SDP}_{\text{context}} = \frac{\text{SP}_{\text{context}}}{\text{MP}_{\text{context}}} \times 100 \tag{10}
\]

In (10), SDP_context represents the static display performance for a specific context (2D, 3D, or VR), SP_context the system performance in that context, and MP_context the maximum possible performance in that context, with context ∈ {2D, 3D, VR}.

Figure 7. Static display performance.

In Fig. 7, the data compare design systems in 2D, 3D, and VR static display contexts. PFDN-VGDM surpasses all systems in 2D with 88% performance, whereas MVDA struggles the most. PFDN-VGDM leads in 3D with 90%, somewhat better than in 2D. PFDN-VGDM performs best in VR at 92%, with VRVC also improving. MVDA [18] improves the most from 2D to VR but performs worst across all categories. VRVC improves VR static displays, while DTVD performs consistently across all display types. PFDN-VGDM gives designers a versatile and effective option for working with static visual elements across platforms.

5. Conclusion

The probabilistic fuzzy dynamic network-based virtual graphic design map greatly improves VR graphics.
VR visual communication issues are addressed by intelligent feature extraction, fuzzy cognitive mapping, and dynamic picture processing. Enhanced visual quality, feature analysis, design efficiency, and VR adaptation are the key benefits. Designers working with complex visual data benefit from PFDN-VGDM's contextually responsive decision-making. Although progress has been made, PFDN-VGDM has limitations. Its computational load precludes use on low-processing devices. Complex visuals strain the system's style transfer. Its limited customisation may deter designers with different tastes. Testing on unstandardised or real-world data may be problematic due to dataset volatility. VR interaction is difficult owing to software and hardware differences. Comparing controlled performance metrics to real-world ones can be challenging, especially across different settings. These limits must be overcome for optimal PFDN-VGDM system performance. Future research should leverage hardware acceleration or lightweight approaches to boost computational efficiency and device accessibility. For artistic styles, the style transfer technique must capture the subtleties of sophisticated design elements. User control over system parameters improves adaptability and satisfies designers. In future iterations, adaptive learning may increase robustness for different datasets and design trends. Real-time collaboration and VR development platform compatibility ease design. Cross-platform interoperability and customisable interface design boost virtual reality use. After these difficulties are fixed, PFDN-VGDM will become a new VR graphic design tool that improves visual communication and system efficacy.

Funding

This study was supported by the 2022 Anhui Provincial Quality Engineering Project, Transformation and Upgrading of Traditional Specialties (No.
2022zygzts091) and the 2020 Anhui Quality Engineering Project: Research on characteristic teaching reform of visual communication design major in local universities under R+CDIO mode (No. 2020jyxm1953).

References

[1] Y. Gu, Q. Wang, and W. Gu, The innovative application of visual communication design in modern art design, Electronics, 12(5), 2023, 1150.
[2] R. Mykhailova, O. Abramova, N. Kravchenko, I. Petrova, I. Nebesnyk, and M. Sofilkanych, Modern web design and blog design: Virtual reality and augmented reality, BRAIN. Broad Research in Artificial Intelligence and Neuroscience, 14(3), 2023, 394–407.
[3] E.H. Korkut and E. Surer, Visualization in virtual reality: A systematic review, Virtual Reality, 27(2), 2023, 1447–1480.
[4] D. Paes, J. Irizarry, M. Billinghurst, and D. Pujoni, Investigating the relationship between three-dimensional perception and presence in virtual reality-reconstructed architecture, Applied Ergonomics, 109, 2023, 103953.
[5] P.M. Shakeel and S. Baskar, Automatic human emotion classification in web document using fuzzy inference system (FIS), International Journal of Technology and Human Interaction, 16(1), 2020, 94–104.
[6] S. Pastel, J. Marlok, N. Bandow, and K. Witte, Application of eye-tracking systems integrated into immersive virtual reality and possible transfer to the sports sector - A systematic review, Multimedia Tools and Applications, 82(3), 2023, 4181–4208.
[7] J. Radianti, T.A. Majchrzak, J. Fromm, and I. Wohlgenannt, A systematic review of immersive virtual reality applications for higher education: Design elements, lessons learned, and research agenda, Computers & Education, 147, 2020, 103778.
[8] E. Pietroni and D. Ferdani, Virtual restoration and virtual reconstruction in cultural heritage: Terminology, methodologies, visual representation techniques and cognitive models, Information, 12(4), 2021, 167.
[9] B. He, Application of VR simulation and image optical processing in image visual communication design, Optical and Quantum Electronics, 56(2), 2024, 212.
[10] S. Weber, L. Rudolph, S. Liedtke, C. Eichhorn, D. Dyrda, D.A. Plecher, and G. Klinker, Frameworks enabling ubiquitous mixed reality applications across dynamically adaptable device configurations, Frontiers in Virtual Reality, 3, 2022.
[11] H. Asadi, T. Bellmann, S. Mohamed, C.P. Lim, A. Khosravi, and S. Nahavandi, Adaptive motion cueing algorithm using optimized fuzzy control system for motion simulators, IEEE Transactions on Intelligent Vehicles, 8(1), 2023, 390–403.
[12] Y. Wang, M. Sheng, and D.A. Ghani, Virtual reality and augmented reality-based digital pattern design in the context of the blockchain technology framework, Journal of Autonomous Intelligence, 7(5), 2024.
[13] F. Li, Y. Gao, A. Candeias, and Y. Wu, Virtual restoration system for 3D digital cultural relics based on a fuzzy logic algorithm, Systems, 11(7), 2023, 374.
[14] A. Maden and G.N. Yücenur, Evaluation of sustainable metaverse characteristics using scenario-based fuzzy cognitive map, Computers in Human Behavior, 152, 2024, 108090.
[15] S.S. Kumaran, S.J.S. Chelladurai, K.B.B. Narayanan, and T.A. Selvan, Prediction of received signal strength using the fuzzy logic controller for localisation of sensors in mobile robots, International Journal of Robotics and Automation, 39(4), 2024, 302–311.
[16] Z. Long, Y. Wang, and Z. Luo, Fuzzy control robot energy saving method based on particle swarm optimisation algorithm, International Journal of Robotics and Automation, 39(6), 2024, 482–489.
[17] Y. Wang, J.R. Chardonnet, F. Merienne, and J. Ovtcharova, Using fuzzy logic to involve individual differences for predicting cybersickness during VR navigation, in Proc. 2021 IEEE Virtual Reality and 3D User Interfaces (VR), Lisboa, 2021, 373–381.
[18] X. Guan and K. Wang, Visual communication design using machine vision and digital media communication technology, Wireless Communications and Mobile Computing, 2022, 2022, 1–11.
[19] Q. Sun and Y. Zhu, Teaching analysis for visual communication design with the perspective of digital technology, Computational and Mathematical Methods in Medicine, 2022, 2022, 2411811.
[20] N.M. Sooter and G. Ugazio, Virtual reality for philanthropy: A promising tool to innovate fundraising, Judgment and Decision Making, 18, 2023.
[21] B. Schöne, J. Kisker, R.S. Sylvester, E.L. Radtke, and T. Gruber, Library for universal virtual reality experiments (luVRe): A standardized immersive 3D/360 picture and video database for VR-based research, Current Psychology, 42(7), 2023, 5366–5384.
[22] J.R.J. Neo, A.S. Won, and M.M.C. Shepley, Designing immersive virtual environments for human behavior research, Frontiers in Virtual Reality, 2, 2021.
[23] H. Liu, Z. Wang, A. Mazumdar, and C. Mousas, Virtual reality game level layout design for real environment constraints, Graphics and Visual Computing, 4, 2021, 200020.
[24] Y. Bouteraa, I. Ben Abdallah, A. Ibrahim, and T.A. Ahanger, Development of an IoT-based solution incorporating biofeedback and fuzzy logic control for elbow rehabilitation, Applied Sciences, 10(21), 2020, 7793.
[25] O. Orang, P.C. de Lima e Silva, and F.G. Guimarães, Time series forecasting using fuzzy cognitive maps: A survey, Artificial Intelligence Review, 56(8), 2023, 7733–7794.

https://storage.googleapis.com/openimages/web/downloadv7.html#dense-labels-subset
