Abstract: A method and system (100) of generating a user interface (UI) layout (400) is disclosed. A processor (104) receives one or more prompts (202) in natural language specifying at least one UI requirement from a user. The processor (104) determines an intermediate representation by prompting a fine-tuned Large Language Model (LLM) using the one or more user prompts (202). The processor (104) determines a set of instructions corresponding to the intermediate representation. The processor (104) renders the set of instructions into the UI layout (400) using a rendering engine. The processor (104) generates the UI layout (400) based on rendering of the set of instructions. The set of instructions is displayed in a preview section of the UI layout (400) corresponding to the set of instructions to allow the user to evaluate the generation of the UI layout (400) in real-time. [To be published with FIG. 2]
DESCRIPTION
TECHNICAL FIELD
[0001] This disclosure relates generally to a user interface (UI) layout, and more particularly to a method and system of generating the UI layout by using a plurality of large language models (LLMs).
BACKGROUND
[0002] The process of designing a user interface (UI) necessitates a designer to compose a series of codes, which may include a variety of design requirements based on needs of an organization. In numerous instances, designers and focus groups struggle to strike a balance between backend and frontend frameworks. For instance, focus groups are primarily concerned with the products or services offered by the organization, thus ensuring that the UI layout aligns accordingly. Additionally, designers tend to prioritize the visual aspects of the UI layout and may lack the ability to grasp user needs. For example, if a user is colorblind, the designer might not be adept at creating a layout accommodating such a condition. Consequently, low-fidelity prototypes are produced, often diverging significantly from the original authorial intent. As a result, a user-centric customization may be lacking, as the customization of the UI layout is largely dependent on the designer, given the intricate nature and volume of the codes.
[0003] The prevailing methodologies necessitate human intervention to construct the UI layout by manually composing code and repeatedly iterating the coding process, informed by feedback obtained manually from focus groups. This results in a laborious and error-prone process. Such complexity impedes the efficiency and dependability of the UI layout generation practices.
[0004] Therefore, there is a need for an efficient methodology of generating the UI layout by using large language models (LLMs).
SUMMARY OF THE INVENTION
[0005] In an embodiment, a method of generating a user interface (UI) layout is disclosed. The method may include receiving, by a processor, one or more prompts in natural language specifying at least one UI requirement from a user. The method may further include determining, by the processor, an intermediate representation by prompting a fine-tuned Large Language Model (LLM) using the one or more user prompts. The method may further include determining, by the processor, a set of instructions corresponding to the intermediate representation. The method may further include rendering, by the processor, the set of instructions into the UI layout using a rendering engine. The method may further include generating, by the processor, the UI layout based on rendering of the set of instructions. The set of instructions may be displayed in a preview section of the UI layout corresponding to the set of instructions to allow the user to evaluate the generation of the UI layout in real-time.
[0006] In another embodiment, a system for generating a user interface (UI) layout is disclosed. The system may include a processor and a memory communicably coupled to the processor. The memory stores processor-executable instructions, which when executed by the processor, cause the processor to receive one or more prompts in natural language specifying at least one UI requirement. The processor may be further configured to determine an intermediate representation by prompting a fine-tuned Large Language Model (LLM) using the one or more user prompts. The processor may be further configured to determine a set of instructions corresponding to the intermediate representation. The processor may be further configured to render the set of instructions into a UI layout using a rendering engine. The processor may be further configured to generate the UI layout based on rendering of the set of instructions. The set of instructions may be displayed in a preview section of the UI layout corresponding to the set of instructions to allow the user to evaluate the generation of the UI layout in real-time.
[0007] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles.
[0009] FIG. 1 illustrates a block diagram of a system for generating user interface (UI) layout, in accordance with an embodiment of the present disclosure.
[0010] FIG. 2 illustrates another block diagram of the system of FIG.1, in accordance with an embodiment of the present disclosure.
[0011] FIG. 3 illustrates a functional block diagram of a computing device, in accordance with an embodiment of the present disclosure.
[0012] FIG. 4 illustrates a user interface (UI) layout formed by one or more prompts, in accordance with an embodiment of the present disclosure.
[0013] FIG. 5 illustrates a flowchart of a method of generating the UI layout, in accordance with an embodiment of the present disclosure.
[0014] FIG. 6 illustrates a flowchart of a method of rendering a set of instructions, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF THE DRAWINGS
[0015] Exemplary embodiments are described with reference to the accompanying drawings. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope being indicated by the following claims. Additional illustrative embodiments are listed.
[0016] Further, the phrases “in some embodiments”, “in accordance with some embodiments”, “in the embodiments shown”, “in other embodiments”, and the like, mean a particular feature, structure, or characteristic following the phrase is included in at least one embodiment of the present disclosure and may be included in more than one embodiment. In addition, such phrases do not necessarily refer to the same embodiments or different embodiments. It is intended that the following detailed description be considered exemplary only, with the true scope and spirit being indicated by the following claims.
[0017] As explained earlier, the composition of the UI layout has relied on manual intervention. Although advancements have been made in front-end tools, the creation of the UI layout often necessitates multiple iterative processes informed by the manual feedback provided by focus groups and designers. Such tools have contributed to a fragmented ecosystem, resulting in interoperability challenges, particularly when focus groups utilize or favor disparate tools, thereby causing inefficiencies and communication issues. Notwithstanding the progress in frontend frameworks, the requirement for manual intervention renders the generation of the UI layout a multifaceted process. This requirement frequently results in augmented time and effort, especially for users with less experience. The trial-and-error nature inherent in manual feedback can introduce inefficiencies, thereby making the process more protracted and less reliable.
[0018] Accordingly, the present disclosure provides a method and system for generating user interface (UI) layout.
[0019] Referring now to FIG. 1, a block diagram of a system 100 of generating a user interface (UI) layout is illustrated, in accordance with an embodiment of the current disclosure. The system 100 may include a computing device 102, an external device 112, and a data server 114 communicably coupled to each other through a wired or wireless communication network 110. The computing device 102 may include a processor 104, a memory 106, and an input/output (I/O) device 108. It is to be noted that the system 100 may be a UI-generation tool, an application, an IDE, a form, and the like, accessible by a user to generate a user interface (UI) layout.
[0020] In an embodiment, examples of the processor(s) 104 may include, but are not limited to, a complex instruction set computer (CISC) processor, a reduced instruction set computer (RISC) processor, a very long instruction word (VLIW) processor, general-purpose processors (GPPs), microcontrollers (MCUs), graphics processing units (GPUs), application-specific integrated circuits (ASICs), ultra-low-power (IoT/embedded) processors, system-on-chip processors, and the like, or other future processors.
[0021] In an embodiment, the memory 106 may store instructions that, when executed by the processor 104, cause the processor 104 to generate the UI layout by using a plurality of large language models (LLMs), as will be discussed in greater detail herein below. In an embodiment, the memory 106 may be a non-volatile memory or a volatile memory. Examples of non-volatile memory may include, but are not limited to, a flash memory, a Read Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), and an Electrically EPROM (EEPROM) memory. Further, examples of volatile memory may include, but are not limited to, Dynamic Random Access Memory (DRAM) and Static Random Access Memory (SRAM).
[0022] In an embodiment, the I/O device 108 may include a variety of interface(s), for example, interfaces for data input and output devices, and the like. The I/O device 108 may facilitate inputting of instructions by a user communicating with the computing device 102. For example, the user may input one or more prompts defining a framework for the UI layout. In an embodiment, the I/O device 108 may be wirelessly connected to the computing device 102 through wireless network interfaces such as Bluetooth®, infrared, or any other wireless radio communication known in the art. In an embodiment, the I/O device 108 may be connected to a communication pathway for one or more components of the computing device 102 to facilitate the transmission of input instructions and output results of data generated by various components such as, but not limited to, processor(s) 104 and memory 106.
[0023] In an embodiment, the data server 114 may be enabled in a remote cloud server or a co-located server and may include a database to store an application, a large language model (LLM) and other data necessary for the system 100 to generate the UI layout. In an embodiment, the data server 114 may store data input by an external device 112 (e.g., the one or more prompts) or output generated by the computing device 102 (e.g., the UI layout). It is to be noted that the application may be designed and implemented as either a web application or a software application. The web application may be developed using a variety of technologies such as HTML, CSS, JavaScript, and various web frameworks like React, Angular, or Vue.js. It may be hosted on a web server and accessible through standard web browsers. On the other hand, the software application may be a standalone program installed on users' devices, which may be developed using programming languages such as Java, C++, Python, or any other suitable language depending on the platform. In an embodiment, the computing device 102 may be communicably coupled with the data server 114 through the communication network 110.
[0024] In an embodiment, the communication network 110 may be a wired or a wireless network or a combination thereof. The communication network 110 can be implemented as one of the different types of networks, such as, but not limited to, an Ethernet IP network, intranet, local area network (LAN), wide area network (WAN), the internet, Wi-Fi, LTE network, CDMA network, 5G, and the like. Further, the communication network 110 can either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further, the communication network 110 can include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
[0025] In an embodiment, the computing device 102 may receive a user input of the one or more prompts for generating the UI layout from the external device 112 through the communication network 110. In an embodiment, the computing device 102 and the external device 112 may be a computing system, including, but not limited to, a smart phone, a laptop computer, a desktop computer, a notebook, a workstation, a server, a portable computer, a handheld, or a mobile device. In an embodiment, the computing device 102 may be, but is not limited to being, in-built into the external device 112, or may be a standalone computing device.
[0026] Referring now to FIG. 2, another block diagram 200 of the system of FIG. 1 is illustrated, in accordance with an embodiment of the current disclosure. FIG. 2 is explained in conjunction with FIG. 1. The computing device 102 within the system 100 may be configured to generate a UI layout based on one or more prompts 202 received as the input from the I/O device 108. Further, based on the one or more prompts 202, the computing device 102 may execute a machine learning (ML) model 204 and a transformation model 206 to generate the UI layout 214. The ML model 204 may include a fine-tuned model 208, an intermediate representation (IR) model 210, and a rendering model 212 interconnected by the communication network 110. Each model may be executed by the processor 104 to perform various processes in order to generate the UI layout 214 by using the plurality of Large Language Models (LLMs). Each model may be invoked by corresponding modules of the computing device 102. This is explained in greater detail in conjunction with FIG. 3.
[0027] By way of an example, the one or more prompts 202 may be received from the user as an input from the I/O device 108. The one or more prompts 202 may include natural language specifying at least one UI requirement. The one or more prompts 202 may have a predefined structure, a predefined syntax, and/or a predefined wording. The predefined structure, the predefined syntax, and/or the predefined wording may be determined to be useful for the ML model 204. Further, the one or more prompts 202 may define one or more design attributes. The one or more design attributes may include the layout structure, the plurality of components, a plurality of styling parameters, and an interaction logic. The design guidelines may be a set of regulatory compliances, a set of design parameters, a set of principles of an organization, and the like. In an embodiment, the one or more prompts 202 may be processed to effectively generate the UI layout 214 without manual intervention.
[0028] In an example, the one or more prompts 202 may be parsed to extract the one or more design attributes by using a plurality of Large Language Models (LLMs) included in the ML model 204. Further, the plurality of LLMs may be processed by the fine-tuned model 208 based on the one or more prompts 202. A fine-tuned LLM may be selected by the fine-tuned model 208 to determine the intermediate representation corresponding to the one or more design attributes specified in the one or more prompts 202. Further, the fine-tuned LLM may be trained on historical UI layout data, design guidelines, and user preference patterns. Examples of the LLM may include, but are not limited to, Zephyr, Code Llama, GPT, etc. The plurality of LLMs may be fine-tuned by the fine-tuned model 208 to improve handling of the one or more design attributes.
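By way of a non-limiting illustration, a minimal Python sketch of this prompting step is reproduced below. It assumes a generic text-generation interface; the llm handle, its generate method, and the system preamble are hypothetical placeholders rather than any particular LLM API.

    # Minimal sketch: prompting a fine-tuned LLM to emit an intermediate
    # representation (IR). The `llm` object and its `generate` method are
    # hypothetical stand-ins for whichever fine-tuned model is deployed.

    SYSTEM_PREAMBLE = (
        "You are a UI layout assistant. Given a natural-language UI requirement, "
        "emit only a structured intermediate representation encoding the layout "
        "structure, components, styling parameters, and interaction logic."
    )

    def generate_ir(llm, user_prompt: str) -> str:
        """Prompt the fine-tuned LLM and return the raw IR text."""
        full_prompt = f"{SYSTEM_PREAMBLE}\n\nUser requirement:\n{user_prompt}\n\nIR:"
        return llm.generate(full_prompt)   # assumed generic completion call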
[0029] Further, the intermediate representation may be determined by the IR model 210 when the fine-tuned Large Language Model (LLM) is prompted. In an embodiment, the intermediate representation may include at least one of a structured markup language and a domain-specific UI description syntax encoding the one or more design attributes such as layout data, design guidelines, and the like. By way of example, the intermediate representation may be expressed in Mermaid.js, PlantUML, and the like. The structured markup language may be a markup language designed to define and organize data in a standardized, machine-readable format. Further, the domain-specific UI description syntax may include a specialized language or notation tailored for defining UIs within a specific application domain, such as Interaction Flow Modeling Language (IFML), MARIA XML, and the like.
[0030] In an embodiment, a set of instructions corresponding to the intermediate representation may be determined by the IR model 210. The set of instructions may be determined when the IR model 210 maps the one or more design attributes specified in the intermediate representation to corresponding frontend framework components selected from a group which may include HTML, XML, CSS, VueJS, JavaScript, React, and Angular. The IR model 210 may utilize a combination of learned transformation functions, rule-based templates, or hybrid logic which may be facilitated through an application of a trained neural network, symbolic interpreter, or domain-specific compiler, which interprets nodes, relationships, or tokens in the intermediate representation. Therefore, the set of instructions may be generated in a format suitable for execution within a defined runtime on the processor 104.
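As one hedged illustration of the rule-based-template variant of such a mapping, the Python sketch below walks an IR node tree and emits HTML fragments. The node schema (componentType, styling, children, text) is assumed for illustration only and is not the only possible IR encoding.

    # Sketch of a rule-based IR-to-instructions mapping. The IR node schema
    # (componentType, styling, children, text) is an illustrative assumption.

    TAG_MAP = {"header": "header", "button": "button", "container": "div", "text": "p"}

    def map_node_to_html(node: dict) -> str:
        """Recursively map one IR node to an HTML fragment."""
        tag = TAG_MAP.get(node.get("componentType", "container"), "div")
        style = ";".join(f"{k}:{v}" for k, v in node.get("styling", {}).items())
        children = "".join(map_node_to_html(c) for c in node.get("children", []))
        text = node.get("text", "")
        return f'<{tag} style="{style}">{text}{children}</{tag}>'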
[0031] In an embodiment, the set of instructions may be rendered by the rendering model 212 into the UI layout 214. The set of instructions may be interpreted by the rendering model 212 through a parsing mechanism which extracts the plurality of styling parameters, the layout structure of the plurality of components, and the interaction logic between the plurality of components defined by the one or more design attributes. Based on the parsed data, the rendering model 212 may generate a layout tree or a comparable intermediate visual structure which may represent the spatial and logical organization of the plurality of components. Further, the rendering model 212 may subsequently infer UI usability parameters based on the UI layout data, such as width, height, padding, and alignment, by resolving both static definitions and dynamic conditions, including screen dimensions, user-device orientation, and user context. The UI usability parameters may be configured to instantiate and position the plurality of components within a rendering context, such as a canvas, browser viewport, or native application window.
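A minimal sketch of resolving such usability parameters against a rendering context is given below in Python; the field names (width_pct, padding) and the percentage-of-parent convention are assumptions chosen for illustration, not a prescribed layout model.

    # Sketch: resolving width/padding against a viewport, e.g. a browser window.
    from dataclasses import dataclass, field

    @dataclass
    class LayoutNode:
        name: str
        width_pct: float = 100.0          # static definition: % of parent width
        padding: int = 8
        children: list = field(default_factory=list)

    def resolve_widths(node: LayoutNode, parent_px: int) -> dict:
        """Resolve percentage widths into pixels for a given viewport width."""
        px = int(parent_px * node.width_pct / 100) - 2 * node.padding
        return {"name": node.name,
                "width_px": px,
                "children": [resolve_widths(c, px) for c in node.children]}

    # e.g., resolve_widths(root, 1280) for a desktop viewport and
    #       resolve_widths(root, 390) for a portrait phone screen.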
[0032] Additionally and/or alternatively, the cursor movement, the click patterns, and the time-on-element metrics may be interpreted by the rendering model 212 to enable interactive functionality. This approach allows the rendering model 212 to produce a coherent and responsive UI layout from abstract instructions in a deterministic and reproducible manner.
[0033] In an embodiment, the UI layout 214 may be generated by the transformation model 206 based on rendering of the set of instructions. Moreover, the rendered set of instructions may be displayed in a preview section of the UI layout 214 to allow the user to evaluate the generation of the UI layout 214 in real-time. For example, the transformation model 206 may parse the rendered set of instructions and may construct a layout hierarchy, the positions of the plurality of components, the dimensioning of each component, the plurality of styling parameters, and the like, based on the one or more design attributes.
[0034] In an embodiment, the set of instructions may be updated in real-time by the transformation model 206 based on a contextual feedback received from a user interaction with the set of instructions. By way of example, the contextual feedback may include at least one of explicit feedback and implicit feedback. The contextual feedback may include behavioral interaction data generated during the user interaction and the interaction logic. Moreover, updating the set of instructions may include processing the behavioral interaction data using the machine learning model 204 trained to infer UI usability parameters based on cursor movement, click patterns, and time-on-element metrics.
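For illustration, a simple heuristic scoring of behavioral interaction data is sketched below in Python. The metric names and weights are assumptions standing in for the trained machine learning model described above, not its actual parameters.

    # Sketch: inferring a relevance score per component from behavioral
    # interaction data. The weighting scheme is a heuristic assumption.

    def relevance_score(metrics: dict) -> float:
        """Combine hover time, click count, and dwell time into one score."""
        return (0.4 * metrics.get("hover_seconds", 0.0)
                + 0.4 * metrics.get("clicks", 0)
                + 0.2 * metrics.get("time_on_element", 0.0))

    def rank_components(interactions: dict) -> list:
        """Return component identifiers ordered by inferred relevance."""
        return sorted(interactions,
                      key=lambda c: relevance_score(interactions[c]),
                      reverse=True)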
[0035] FIG. 3 illustrates a functional block diagram 300 of the computing device 102, in accordance with an embodiment of the present disclosure. FIG. 3 is explained in conjunction with FIG. 1 and FIG. 2. In an embodiment, the computing device 102 may include a receiving module 302, a processing module 304, a hyperparameter tuning module 306, an intermediate representation (IR) module 308, an instruction determination module 310, a rendering module 312, a generation module 314, and a feedback module 316.
[0036] The receiving module 302 may receive one or more prompts 202 in natural language specifying at least one UI requirement from the user. In an embodiment, the one or more prompts 202 may be human-generated and fed into the I/O device 108. The UI layout 214 may be an arrangement and design of a plurality of components on a screen or interface to ensure a seamless and intuitive user experience. For example, the one or more prompts 202 may be provided as input using an input interface of the I/O device 108, such as a UI-generation tool, the application, the IDE, the form, and the like, accessible by the user to generate the UI layout 214. The UI layout 214 may include a plurality of components and an arrangement thereof, such as a header, a main content area, a sidebar, a footer, navigation elements, a plurality of interactive elements, a color scheme, icons and graphics, and the like.
[0037] By way of example, the one or more prompts 202 may be structured or unstructured input data which may describe the one or more design attributes. The one or more design attributes may include the layout structure, the plurality of components, the plurality of styling parameters, and the interaction logic. The layout structure may be an organized arrangement of the plurality of components. Further, the plurality of styling parameters may refer to the specific attributes applied to each component to define the visual appearance and user-facing behavior of each component, for example, the color scheme, typography, spacing and layout, interactive states, and the like. Furthermore, the interaction logic may refer to cursor movement, click patterns, and time-on-element metrics.
[0038] Each component may include UI layout data, design guidelines, and user preference patterns. The UI layout data may represent a spatial arrangement, sizing and responsiveness, event handling and interactivity, styling and theming, and the like of the plurality of components. The design guidelines may be a set of regulatory compliances such as the WCAG guidelines, a set of design parameters, a set of principles of an organization, and the like. Further, the plurality of user preference patterns may include settings and preferences, adaptive interfaces, behavioral patterns, progressive disclosure, and the like. Further, the plurality of user preference patterns may refer to the diverse ways in which the user may customize and interact with the plurality of components to align with the needs, behaviors, and preferences of the organization.
[0039] Further, the receiving module 302 may process the one or more prompts 202 for generating the UI layout 214 based on the at least one user requirement by using a plurality of LLMs. In accordance with the exemplary embodiment, the receiving module 302 may receive a prompt such as: "Design a user interface layout for a [type of application, e.g., SaaS dashboard, mobile banking app, developer portal] tailored for [target audience, e.g., enterprise users, tech-savvy professionals, students]. The layout should embody the following design attributes: Purpose: [e.g., data visualization, task management, real-time collaboration]; Style: [e.g., modern, minimalist, dark mode, corporate]; User Needs: [e.g., quick access to information, intuitive navigation, customizable settings]; Accessibility Considerations: [e.g., high contrast colors, screen reader compatibility, keyboard navigation]. Provide a wireframe layout with labeled components, including: Header with [e.g., logo, user profile, notifications]; Main content area with [e.g., data tables, charts, forms]; Call-to-action buttons such as [e.g., 'Sign Up', 'Submit', 'Learn More']; Notifications or alerts for [e.g., system updates, user messages]; Any other relevant UI elements specific to [e.g., your organization's needs, industry standards]" to generate the UI layout 214.
[0040] Further, the one or more prompts 202 may be processed through a preprocessing technique. The preprocessing technique may be selected based on the layout data and the design attributes, such as the WCAG guidelines, the set of principles, and the like. The preprocessing technique may include extracting the UI layout data, design guidelines, user preference patterns, and the like from the one or more prompts in a predefined pattern (i.e., a predefined format). Further, the at least one requirement may be derived from the one or more prompts 202 in the predefined pattern. The predefined pattern allows the receiving module 302 to ensure that the at least one requirement, along with the one or more design attributes, is available for the processing.
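A minimal sketch of such a preprocessing step is given below in Python, pulling labelled design attributes out of a templated prompt into a predefined dictionary format. The field labels mirror the example template in paragraph [0039]; everything else is an illustrative assumption.

    # Sketch: extracting 'Label: value' design attributes from a templated prompt.
    import re

    FIELDS = ["Purpose", "Style", "User Needs", "Accessibility Considerations"]

    def preprocess_prompt(prompt: str) -> dict:
        """Extract known design-attribute fields into a predefined format."""
        extracted = {}
        for label in FIELDS:
            match = re.search(rf"{re.escape(label)}\s*:\s*([^\n;\"]+)", prompt)
            if match:
                extracted[label.lower().replace(" ", "_")] = match.group(1).strip()
        return extracted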
[0041] Further, the processing module 304 may be configured to determine a standalone Large Language Model (LLM) from the plurality of LLMs available in a data storage in the memory 106. By way of example, the processing module 304 may execute the fine-tuned model 208 to fine-tune the plurality of LLMs based on the one or more prompts 202 by using a fine-tuning dataset to determine the standalone fine-tuned LLM. As will be appreciated, a standalone fine-tuned LLM is preferable due to factors such as the WCAG guidelines, the set of principles, and the like. The standalone fine-tuned LLM may be selected based on the at least one requirement, resource usage, and hardware availability.
[0042] By way of example, the plurality of LLMs may be selected from an open source and a proprietary-based source. It is to be noted that the standalone fine-tuned LLM is trained on historical UI layout data, design guidelines, and user preference patterns which are based on the set of principles of an organization. For example, the set of principles may include a set of colors trademarked by the organization, a set of trademarked fonts, a set of trademarked abbreviations, and the like. Furthermore, the set of principles may be specified in the one or more prompts 202. As the UI layout 214 may be required to be generated based on the set of principles of the organization, the need to generate the UI layout 214 from scratch may be eliminated.
[0043] Upon determination of the standalone fine-tuned LLM, the hyperparameter tuning module 306 may be configured to optimize the fine-tuning process. The hyperparameter tuning module 306 may be configured to determine the UI requirements as per the UI layout data, the design guidelines (e.g., the WCAG guidelines and the set of principles), and the plurality of user preference patterns. In other words, the hyperparameter tuning module 306 may optimize the performance of the fine-tuned model 208 by systematically adjusting a predefined set of hyperparameters that govern the training behavior of the standalone fine-tuned LLM.
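As a hedged illustration of systematically adjusting a predefined set of hyperparameters, the Python sketch below performs a simple grid search. The parameter names, the value ranges, and the fine_tune_and_score callback are illustrative assumptions standing in for the actual training run.

    # Sketch: grid search over a predefined set of fine-tuning hyperparameters.
    from itertools import product

    GRID = {
        "learning_rate": [1e-5, 3e-5],   # illustrative values only
        "batch_size": [8, 16],
        "epochs": [2, 3],
    }

    def tune(fine_tune_and_score) -> dict:
        """Return the hyperparameter combination with the best validation score."""
        best, best_score = None, float("-inf")
        for values in product(*GRID.values()):
            params = dict(zip(GRID.keys(), values))
            score = fine_tune_and_score(**params)   # hypothetical training callback
            if score > best_score:
                best, best_score = params, score
        return best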
[0044] Further, the IR module 308 may determine an intermediate representation by prompting the fine-tuned LLM using the one or more prompts 202. The intermediate representation may include at least one of a structured markup language and a domain-specific UI description syntax encoding one or more design attributes. The one or more design attributes may include the layout structure, the plurality of components, the plurality of styling parameters, and the interaction logic. By way of example, the intermediate representation may be a lexicographically transformable form based on fine-tuning the one or more prompts 202. By way of example, the intermediate representation may be expressed in Mermaid.js, and the like. Further, the lexicographically transformable form may be a pseudo-code, a pictorial representation, and the like, which may conform to the set of principles, the WCAG guidelines, and the like.
[0045] By way of example, the intermediate representation may include the color and font trademarks of the organization, the layout of the header, navigation elements, color scheme, footer, and alignment thereof. For example, to ensure the layout configuration of one of the buttons from the plurality of components of the UI layout, the "componentType" may be represented as "callToActionButton", the "text" may be represented as "Submit/Sign Up/Learn More", the "backgroundColor" may be represented as "#4285F4", and the "onClick" may be represented as "submitForm()".
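For illustration only, the button example above may be collected into the following IR fragment, expressed here as a Python dictionary. The key names follow the example in this paragraph, while the surrounding structure (styling and interaction sub-objects) is an assumption.

    # Illustrative IR fragment for the example button in paragraph [0045].
    button_ir = {
        "componentType": "callToActionButton",
        "text": "Submit",                 # or "Sign Up" / "Learn More"
        "styling": {"backgroundColor": "#4285F4"},
        "interaction": {"onClick": "submitForm()"},
    }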
[0046] Upon determining the intermediate representation, the instruction determination module 310 may determine a set of instructions corresponding to the intermediate representation. The set of instructions may be determined by mapping the one or more design attributes specified in the intermediate representation to corresponding frontend framework components selected from a group which may include HTML, XML, CSS, VueJS, JavaScript, React, and Angular. Such mapping allows the system 100 to flexibly generate a deployable set of instructions across different frontend ecosystems while maintaining consistency with the original design intent of the user. For example, the set of instructions specified in the intermediate representation may be compatible with the group and may include "Submit".
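A hedged sketch of emitting the same IR fragment for two targets from the group named above (HTML and React) is given below; the template strings are illustrative and do not constitute a full compiler.

    # Sketch: emitting the button IR of paragraph [0045] for two frameworks.

    def to_html(ir: dict) -> str:
        return (f'<button style="background-color:{ir["styling"]["backgroundColor"]}" '
                f'onclick="{ir["interaction"]["onClick"]}">{ir["text"]}</button>')

    def to_react(ir: dict) -> str:
        return (f'<Button style={{{{backgroundColor: "{ir["styling"]["backgroundColor"]}"}}}} '
                f'onClick={{() => {ir["interaction"]["onClick"]}}}>{ir["text"]}</Button>')

    # e.g., to_html(button_ir) and to_react(button_ir) emit equivalent fragments,
    # preserving the same design intent across frontend ecosystems.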
[0047] Upon determining the set of instructions, the rendering module 312 may invoke the rendering model 212 to render the set of instructions into the UI layout 214. The rendering module 312 may be a rendering engine which may interpret the set of instructions defining the layout structure of the plurality of components, the plurality of styling parameters, the interaction logic between the plurality of components, and the like. Therefore, the rendering module 312 may render a visual representation, i.e., the UI layout, which may be rendered on a preview section. The rendering module 312 may facilitate this process by passing the set of instructions to the rendering model 212, managing rendering contexts (e.g., viewport dimensions or device constraints), and handling any dynamic updates required for real-time preview or interactivity.
[0048] The UI generation module 314 may receive the rendered set of instructions and execute the transformation model 206 to generate the UI layout 214. The UI generation module 314 may be a rule-based engine, a template-driven renderer, or a machine learning-based layout synthesizer configured to interpret the set of instructions and assemble the UI layout 214. By way of example, the UI generation module 314 may utilize pre-trained models and datasets to comprehend the one or more design attributes and user preferences. In an exemplary embodiment, the UI generation module 314 may be an On-Premises Support Module, which addresses the organization's unique security or regulatory requirements, thereby enabling the implementation of the UI layout generation service within the organization's infrastructure and ensuring data management.
[0049] By way of example, the set of instructions may be displayed in the preview section of the UI layout corresponding to the set of instructions to allow the user to evaluate the generation of the UI layout in real-time. By way of example, the preview section may be dynamically updated to reflect the generated layout in real-time, thereby allowing the user to evaluate the structure, styling, and functionality of the UI elements before final deployment or export. For instance, the set of instructions may be modified either manually by the user or automatically by the plurality of LLMs of the system 100, and the preview section may simultaneously update to mirror the changes, facilitating an interactive and iterative design experience. This real-time feedback mechanism enhances usability by enabling immediate visual validation of the transformation results. This is explained in greater detail hereinafter.
[0050] In an embodiment, the set of instructions may be updated in real-time based on a contextual feedback received from a user interaction with the rendered set of instructions. The contextual feedback loops may show immediate previews of the UI layout, allowing for repeated tweaks and refinements until the desired result is reached. By way of example, the contextual feedback may include at least one of explicit feedback and implicit feedback. The explicit feedback may be feedback provided by the user, and the implicit feedback may be feedback provided by the plurality of LLMs. The contextual feedback may include behavioral interaction data. Further, the set of instructions may be updated by processing the behavioral interaction data using the rendering model of the machine learning model trained to infer UI usability parameters based on cursor movement, click patterns, and time-on-element metrics.
[0051] By way of example, the behavioral interaction data may be utilized by the feedback module 316 to automatically render personalized layouts based on real-time user engagement on the preview section. The feedback module 316 may monitor the cursor movement and the click distribution patterns across the interface elements during the UI layout generation process. If the feedback module 316, in conjunction with the rendering module 312, detects that the user consistently hovers over specific widgets but hesitates to manually add them to their dashboards, and concurrently identifies high click frequency on related metrics, the machine learning model interprets such patterns to infer user intent and relevance. Based on such inferences, the rendering module 312 may dynamically generate and arrange the UI layout to pre-populate the most relevant components, placing them in high-visibility zones within a grid-based interface. This approach minimizes manual configuration, improves discoverability of relevant tools, and adapts the interface structure in response to individualized usage patterns, thereby enhancing overall usability and task efficiency. Further, the behavioral interaction data generated from the one or more design attributes may be stored in the database for further analysis and utilization.
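Building on the scoring sketch accompanying paragraph [0034], the following minimal Python sketch places the highest-ranked components into high-visibility zones of a grid; the zone ordering (top-left first) is an illustrative assumption.

    # Sketch: pre-populating high-visibility grid zones with the components
    # ranked by the earlier rank_components() sketch.

    HIGH_VISIBILITY_ZONES = [(0, 0), (0, 1), (1, 0), (1, 1)]  # (row, col), best first

    def prepopulate(ranked_components: list) -> dict:
        """Assign top-ranked components to grid zones in visibility order."""
        return {zone: comp
                for zone, comp in zip(HIGH_VISIBILITY_ZONES, ranked_components)}

    # e.g., prepopulate(rank_components(interactions)) places the most engaged
    # widgets where the user is most likely to look first.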
[0052] It should be noted that all such aforementioned modules 302-316 may be represented as a single module or a combination of different modules. Further, as will be appreciated by those skilled in the art, each of the modules 302-316 may reside, in whole or in parts, on one device or multiple devices in communication with each other. In some embodiments, each of the modules 302-316 may be implemented as a dedicated hardware circuit comprising a custom application-specific integrated circuit (ASIC) or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. Each of the modules 302-316 may also be implemented in a programmable hardware device such as a field programmable gate array (FPGA), programmable array logic, programmable logic device, and so forth. Alternatively, each of the modules 302-316 may be implemented in software for execution by various types of processors (e.g., processor 104). An identified module of executable code may, for instance, include one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, function, or other construct. Nevertheless, the executables of an identified module or component need not be physically located together but may include disparate instructions stored in different locations which, when joined logically together, include the module and achieve the stated purpose of the module. Indeed, a module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different applications, and across several memory devices.
[0053] As will be appreciated by one skilled in the art, a variety of processes may be employed for generating the UI layout. For example, the exemplary system 100 and the associated processor 104 may generate the UI layout by the processes discussed herein. In particular, as will be appreciated by those of ordinary skill in the art, control logic and/or automated routines for performing the techniques and steps described herein may be implemented by the system 100 and the associated computing device 102 either by hardware, software, or combinations of hardware and software. For example, suitable code may be accessed and executed by the one or more processors on the system 100 to perform some or all of the techniques described herein. Similarly, application-specific integrated circuits (ASICs) configured to perform some or all of the processes described herein may be included in the one or more processors on the system 100.
[0054] Referring now to FIG. 4, the UI layout 400 formed by the one or more prompts 202 is illustrated, in accordance with an embodiment of the present disclosure. FIG. 4 is explained in conjunction with FIG. 3. The UI layout 400 may correspond to the UI layout 214 and is hereinafter referred to as the UI layout 400. The UI layout 400 is designed to interact with the user and configured to enable changes based on the one or more design attributes. The UI layout 400, as shown in FIG. 4, depicts a display section 402, a feedback input section 403, and a real-time preview section 404 generated by the computing device 102.
[0055] The display section 402 illustrates the generated UI layout. The UI layout may be designed to provide a clean, intuitive, and responsive experience, facilitating seamless interaction with the user. The display section 402 may list the header, the main content area, the sidebar, the footer, the plurality of navigation elements, the plurality of interactive elements, the plurality of color schemes, the plurality of icons and graphics, and the like. Further, the display section 402 may be generated based on the one or more prompts 202 fed as input by the user. Further, the display section 402 may be based on the WCAG guidelines and the set of principles of the organization. The display section 402 may be user-interactive such that, for example, a user with hearing impairments may be capable of interacting therewith.
[0056] The display section 402 may define a first set of parallel boundaries 406, 408 and a second set of parallel boundaries 410, 412 disposed perpendicularly to the first set of parallel boundaries 406, 408. The first set of parallel boundaries 406, 408 and the second set of parallel boundaries 410, 412 may be dynamically formed and may be based on the dimensions of the computing system, for example, the PC, laptop, mobile device, and the like. The display section 402 may be formed by a unique and dynamic arrangement of a plurality of components 414 which satisfy the one or more design attributes fed as input in the form of the one or more prompts 202 in the I/O device 108.
[0057] Further, the preview section 404 may be displayed within the display section 402 and aligned perpendicularly to the first set of parallel boundaries 406, 408 and the second set of parallel boundaries 410, 412. The preview section 404 may be draggable and may appear at the junction of the first set of parallel boundaries 406, 408 and the second set of parallel boundaries 410, 412. Further, the preview section 404 may include the set of instructions generated based on the processing of the one or more prompts 202 by the processor 104 and the plurality of LLMs. The set of instructions may be generated based on the encoding of the one or more design attributes. Further, the preview section 404 may be user-interactive and configured to enable changes in the one or more design attributes, thereby allowing the user or developer to evaluate the generated display section 402.
[0058] In an exemplary embodiment, the user may evaluate the set of principles and the WCAG guidelines as reflected in the header, the main content area, the sidebar, the footer, the plurality of navigation elements, the plurality of interactive elements, the plurality of color schemes, the plurality of icons and graphics, and the like, and may provide a set of instructions to implement changes to meet a desired requirement of the UI layout via the feedback input section 403. The set of instructions may be analyzed and implemented in the preview section 404 to enable the changes in the display section 402. Based on the evaluation, the changes in the preview section 404 may be verified and implemented.
[0059] It should be noted that the changes may be made by the user and by the fine-tuned LLM. The LLM may be accessible via the application, IDE, form, and the like for enabling changes which may be apparent from the preview section 404 via the network 110. In an embodiment, the LLM may be pretrained or fine-tuned for determining the intermediate representation from a tokenized script. The tokenized script may be determined by a tokenizer.
[0060] Referring now to FIG. 5, a flowchart 500 of a method of generating the UI layout is illustrated, in accordance with an embodiment of the present disclosure. FIG. 5 is explained in conjunction with FIGs. 1, 2, and 3. In an embodiment, the method may include a plurality of steps. Each step of the flowchart 500 may be executed by various modules, the same as the modules of the computing device 102, so as to generate the UI layout by using the plurality of LLMs.
[0061] At step 502, the computing device 102 may receive the one or more prompts 202 in natural language specifying at least one UI requirement from the user. In an embodiment, the one or more prompts 202 may be human-generated and fed into the I/O device 108. The one or more prompts 202 may be structured or unstructured input data which may describe the user interface (UI) requirements, component specifications, design constraints, and the like. For example, the user may feed the one or more design attributes which may adhere to the WCAG guidelines and may include the set of principles, dimensions of the plurality of components of the UI layout, and the like.
[0062] Further, at step 504, the computing device 102 may further determine the intermediate representation by prompting the fine-tuned LLM using the one or more user prompts 202. Further, based on the one or more prompts 202, the dimensions of each component from the plurality of components, the color scheme associated with each component, the navigation elements, and the like may be selected, processed, and displayed in the intermediate representation.
[0063] Thus, at step 506, the computing device 102 may determine the set of instructions corresponding to the intermediate representation. The set of instructions may be determined by mapping the one or more design attributes specified in the intermediate representation to corresponding frontend framework components selected from the group which may include HTML, XML, CSS, VueJS, JavaScript, React, and Angular.
[0064] Further, at step 508, the computing device 102 may further render the set of instructions into the UI layout. The set of instructions may define component hierarchies, layout constraints, style attributes, and interaction logic, and may produce the visual representation that may be rendered on the preview section.
[0065] Further, at step 510, the computing device 102 may further receive the rendered set of instructions to generate the UI layout. By way of example, the set of instructions may be displayed in a preview section of the UI layout corresponding to the set of instructions to allow the user to evaluate the generation of the UI layout in real-time. The set of instructions may include structured data formatted in a markup or scripting language, such as HTML, CSS, JavaScript, React JSX, or VueJS templates, and may represent frontend components, layout logic, and styling rules.
[0066] Further, at step 512, the computing device 102 may further update the set of instructions which are rendered in real-time based on the contextual feedback received from the user interaction. Further, the set of instructions may be updated by processing the behavioral interaction data using the machine learning model trained to infer UI usability parameters based on cursor movement, click patterns, and time-on-element metrics. The contextual feedback may be provided by the user or by the fine-tuned LLM. By way of example, the contextual feedback may be derived from the behavioral interaction data such as cursor movement, click patterns, and time-on-element metrics.
[0067] Referring now to FIG. 6, a flowchart 600 of a method of rendering a set of instructions is illustrated, in accordance with an embodiment of the present disclosure. FIG. 6 is explained in conjunction with FIG. 5. In an embodiment, the method may include a plurality of steps. Each step of the flowchart 600 may be executed by various modules, the same as the modules of the computing device 102, so as to generate the UI layout by using the plurality of LLMs.
[0068] At step 602, the user may initiate the process of generating the UI layout by processing one or more prompts 202 by the computing device 102. The one or more prompts 202 may represent the at least one requirement based on the UI layout data, design guidelines, and user preference patterns, such as the WCAG guidelines, the set of principles, and the like. Further, at step 604, the one or more prompts 202 may be processed through a preprocessing technique. The preprocessing technique may be selected based on the UI layout data, design guidelines, user preference patterns, and the like, such as the set of WCAG guidelines and the set of principles of the organization. The at least one requirement may be derived from the one or more prompts 202 in the predefined pattern by the plurality of LLMs.
[0069] At step 606, upon processing the one or more prompts 202, the computing device 102 may determine the at least one requirement as per the UI layout data, the design guidelines, and the plurality of user preference patterns. The computing device 102 may perform hyperparameter tuning of the plurality of LLMs to optimize the performance of the fine-tuned model by systematically adjusting a predefined set of hyperparameters that govern the training behavior of the fine-tuned LLM.
[0070] Further, at step 608, the fine-tuned LLM may be evaluated by the computing device 102 such that the intermediate representation may be determined corresponding to the set of instructions. The intermediate representation may be at least one of a structured markup language and a domain-specific UI description syntax encoding one or more design attributes.
[0071] At step 610, the computing device 102 may evaluate a decision made by the fine-tuned LLM. Additionally, the computing device 102 may determine the intermediate representation and the set of instructions corresponding to the intermediate representation. Accordingly, the computing device 102 may map the one or more design attributes specified in the intermediate representation to corresponding frontend framework components selected from a group which may include HTML, XML, CSS, VueJS, JavaScript, React, and Angular. If the mapping is satisfied and corresponds to the set of principles and the one or more design attributes as specified in the one or more prompts 202, the set of instructions may be processed for rendering thereof by the computing device 102 at step 612, as sketched below. If the mapping is unsatisfied and does not correspond to the set of principles and the one or more design attributes as specified in the one or more prompts 202, the set of instructions may be processed again by the computing device 102 as in step 606, or the explicit feedback may be executed by the user.
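A minimal Python sketch of this decision logic is given below, assuming set-based bookkeeping of design attributes; the render and reprocess callbacks are hypothetical stand-ins for steps 612 and 606, respectively.

    # Sketch of the decision at step 610: verify that each design attribute in
    # the IR was mapped to a frontend component; otherwise loop back.

    def mapping_satisfied(ir_attributes: set, mapped_attributes: set) -> bool:
        """All attributes specified in the IR must appear in the mapping."""
        return ir_attributes <= mapped_attributes

    def check_and_route(ir_attributes, mapped_attributes, render, reprocess):
        if mapping_satisfied(set(ir_attributes), set(mapped_attributes)):
            render()        # proceed to step 612
        else:
            reprocess()     # loop back to step 606 or request explicit feedback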
[0072] Further, at step 612, the computing device 102 may be configured to render the set of instructions to generate the UI layout 400 and the preview section 404 corresponding to the UI layout. The preview section 404 may include the set of instructions, which may be accessible to the user or may be user-interactive such that the one or more design attributes may be satisfied. At step 614, if the UI layout 400 satisfies the one or more design attributes, the set of principles, and the like, the UI layout 400 may be stored in the database and may be used for further training of the plurality of LLMs. Furthermore, if unsatisfied, the contextual feedback may be provided at the step 618 such that the one or more design attributes may be changed by the user. The one or more design attributes may be changed to satisfy the needs of the organization and ensure user-customization. The change in the one or more design attributes may be performed from the preview section 404. By way of example, the contextual feedback may be the implicit feedback and the explicit feedback.
[0073] As will be appreciated by those skilled in the art, the techniques described in the various embodiments discussed above are not routine, or conventional, or well-understood in the art. The techniques discussed above provide for generating the UI layout. In light of the above-mentioned advantages and the technical advancements provided by the disclosed method and system, the claimed steps as discussed above are not routine, conventional, or well understood in the art, as the claimed steps enable the following solutions to the existing problems in conventional technologies. Further, the claimed steps bring an improvement in the functioning of the device itself as the claimed steps provide a technical solution to a technical problem.
[0074] Thus, the disclosed method and system try to overcome the technical problem of generating the UI layout that conventionally requires manual intervention. In an embodiment, advantages of the disclosed method and system may include, but are not limited to, enhanced accuracy in user-centric customization of the UI layout in real time, automation of the iteration and updating process, adaptability and flexibility, and continuous improvement through feedback.
[0075] The disclosed system and method enable automation in the UI layout generation process by allowing for customization and fine-tuning. The user provides specific design preferences and requirements, and the system incorporates them into the UI layout. This ensures that the UI layout aligns with the branding and visual identity of the organization, and thus offers a streamlined and efficient solution for building interactive and responsive web pages within a tight deadline. Further, the present method and system leverage machine learning, continuous feedback, and cloud-based rendering to expedite the UI layout development process while maintaining high-quality standards. In other words, the disclosed method and system adhere to the WCAG guidelines and the set of principles of the organization such that consistent industry standards may be met. The styling parameters and the UI layout data may be hierarchically adjusted based on the behavioral interaction data.
[0076] The disclosed method and system utilize a Large Language Model (LLM) to accurately generate the set of instructions corresponding to the one or more prompts 202 such that the desired UI layout may be generated. This reduces the likelihood of errors and increases the precision of UI layout generation, thereby significantly reducing the manual effort required by automating the feedback loops with utilization of the fine-tuned LLMs, such that accuracy at every step of the process may be maintained.
[0077] The specification has described method and system for generating the UI layout. The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0078] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[0079] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
CLAIMS
I/We Claim:
1. A method of generating a user interface (UI) layout (400), the method comprising:
receiving, by a processor (104), one or more prompts (202) in natural language specifying at least one UI requirement from a user;
determining, by the processor (104), an intermediate representation by prompting a fine-tuned Large Language Model (LLM) using the one or more user prompts;
determining, by the processor (104), a set of instructions corresponding to the intermediate representation;
rendering, by the processor (104), the set of instructions into the UI layout using a rendering engine; and
generating, by the processor (104), the UI layout based on rendering of the set of instructions, wherein the set of instructions is displayed in a preview section of the UI layout corresponding to the set of instructions to allow the user to evaluate the generation of the UI layout in real-time.
2. The method as claimed in claim 1, comprising:
updating, by the processor (104), the set of instructions in real-time based on a contextual feedback received from a user interaction with the rendered set of instructions, wherein the contextual feedback comprises at least one explicit feedback and implicit feedback.
3. The method as claimed in claim 2, wherein the contextual feedback comprises a behavioural interaction data, and wherein updating the set of instructions comprises:
processing, by the processor (104), the behavioural interaction data using a machine learning model trained to infer UI usability parameters based on cursor movement, click patterns, and time-on-element metrics.
4. The method as claimed in claim 1, comprising:
processing, by the processor (104), the one or more prompts (202) for generating the UI layout based on the at least one user requirement by using a plurality of LLMs; and
fine-tuning, by the processor, the plurality of LLMs based on the one or more prompts (202) by the fine-tuned LLM.
5. The method as claimed in claim 4, wherein the fine-tuned LLM is trained on historical UI layout data, design guidelines, and user preference patterns.
6. The method as claimed in claim 1, wherein the intermediate representation comprises:
at least one of a structured markup language and domain-specific UI description syntax encoding one or more design attributes, wherein the one or more design attributes comprises a layout structure, a plurality of components, a plurality of styling parameters, and an interaction logic.
7. The method as claimed in claim 6, wherein determining the set of instructions based on the intermediate representation comprises:
mapping, by the processor (104), the one or more design attributes specified in the intermediate representation to corresponding frontend framework components selected from a group comprising HTML, XML, CSS, VueJS, JavaScript, React, and Angular.
8. A system (100) for generating a user interface (UI) layout (400), the system (100) comprising:
a processor (104); and
a memory (106) coupled to the processor (104), wherein the memory (106) stores processor-executable instructions, which when executed by the processor (104), cause the processor (104) to:
receive one or more prompts (202) in natural language specifying at least one UI requirement;
determine an intermediate representation by prompting a fine-tuned Large Language Model (LLM) using the one or more user prompts (202);
determine a set of instructions corresponding to the intermediate representation;
render the set of instructions into a UI layout using a rendering engine; and
generate the UI layout based on rendering of the set of instructions, wherein the set of instructions is displayed in a preview section of the UI layout corresponding to the set of instructions to allow the user to evaluate the generation of the UI layout in real-time.
9. The system (100) as claimed in claim 8, wherein the processor (104) is configured to:
update the set of instructions in real-time based on contextual feedback received from a user interaction with the rendered set of instructions, wherein the contextual feedback comprises at least one explicit feedback and implicit feedback.
10. The system (100) as claimed in claim 9, wherein the contextual feedback comprises a behavioural interaction data, and wherein to update the set of instructions, the processor (104) is configured to:
process the behavioural interaction data using a machine learning model trained to infer UI usability parameters based on cursor movement, click patterns, and time-on-element metrics.
11. The system (100) as claimed in claim 8, wherein the processor (104) is configured to:
process the one or more prompts (202) for generating the UI layout based on at least one user requirement by using a plurality of LLMs; and
fine-tune the plurality of LLMs based on the one or more prompts (202) by the fine-tuned LLM.
12. The system (100) as claimed in claim 11, wherein the fine-tuned LLM is trained on historical UI layout data, design guidelines, and user preference patterns.
13. The system (100) as claimed in claim 8, wherein the intermediate representation comprises:
at least one of a structured markup language and domain-specific UI description syntax encoding one or more design attributes, wherein the one or more design attributes comprises a layout structure, a plurality of components, a plurality of styling parameters, and an interaction logic.
14. The system (100) as claimed in claim 13, wherein to determine the set of instructions based on the intermediate representation, the processor (104) is configured to:
map the one or more design attributes specified in the intermediate representation to corresponding frontend framework components selected from a group comprising HTML, XML, CSS, VueJS, JavaScript, React, and Angular.
15. A user interface (UI) layout (400) formed by one or more prompts (202), the UI layout (400) comprising:
a display section (402) formed based on processing of the one or more prompts (202), wherein the display section (402) defines a first set of parallel boundaries (406, 408), and a second set of parallel boundaries (410, 412) disposed perpendicularly to the first set of parallel boundaries (406, 408); and
a real-time preview section (404) displayed within the display section (402) and aligned perpendicularly to at least the first set of parallel boundaries (406, 408) or the second set of parallel boundaries (410, 412), and
wherein the real-time preview section (404) comprises a set of instructions generated based on the processing of the one or more prompts (202) by a processor (104) and a plurality of LLMs, and
wherein the real-time preview section (404) is user-interactive and configured to enable change in one or more design attributes,
wherein the one or more design attributes comprise a layout structure, a plurality of components, a plurality of styling parameters, and an interaction logic.