Wipzo Systech Private Limited

10 Job openings at Wipzo Systech Private Limited
Data Catalog and Governance | Pune, Maharashtra, India | 0 years | Not disclosed | On-site | Full Time


Project Manager | Hyderabad, Telangana, India | 6-10 years | INR 15.0-30.0 Lacs P.A. | On-site | Full Time

Role: Project Manager (6 openings)
Positions: 6 (Kolkata: 2, Hyderabad: 4)
Experience: 6-10 years, in technology companies only.

Job Responsibilities:
- Drive end-to-end program management of initiatives, from requirements to delivery, for a wide range of customers.
- Prepare user stories, data flows, Business Requirement Documents (BRD), Functional Requirement Specifications (FRS), and use cases for every initiative.
- Construct workflow charts and diagrams in collaboration with customers and ZINFI Product.
- Create detailed plans for executing and implementing new processes, taking the customer into confidence.
- Monitor project progress and perform daily, weekly, and monthly reviews.
- Analyze current processes using operational metrics and reports as mandated by the customer.
- Communicate with team heads regarding common challenges, roadblocks, and other issues that interrupt their workflow.
- Produce detailed costing for customers and ensure the contract is profitable.
- Ensure the company's product and features can deliver on the customer's requirements.

Key Skills and Expertise:
- Ability to impact operations and effect change without being confrontational.
- Detail-oriented, analytical, and inquisitive.
- Ability to work independently and with others.
- Extremely organized, with strong time-management skills.
- Excellent communication skills and the ability to explain complex issues.
- Excellent project-management skills (Aha!, MS Project, JIRA, ADO, etc.).
- Customer awareness: ability to understand the customer, their needs, workflows, business, potential impact opportunities, and KPIs.
- Business awareness: ability to understand ZINFI's offering and how it can be used to solve a business case for a customer.

About ZINFI Technologies Inc.: Headquartered in the Silicon Valley area near San Francisco, California, ZINFI Technologies Inc. is a leading global provider of innovative services. Established in 2004, ZINFI operates with a corporate office in California and regional offices in Australia, China, India, Japan, Singapore, the UK, and the US. In India, our offices are located in Kolkata and Hyderabad. As a rapidly growing technology company led by an experienced management team, ZINFI offers a unique and dynamic work environment. This is an exceptional opportunity to join an exciting business and grow your career in a thriving global organization. To learn more about us, please visit our website: www.zinfi.com

Role: IT Project Manager
Industry Type: IT Services & Consulting
Department: Project & Program Management
Employment Type: Full Time, Permanent
Role Category: Technology / IT
Education: UG: Any Graduate; PG: Any Postgraduate; Doctorate: Any Doctorate

Senior Python Developer | Gurgaon, Haryana, India | 6-11 years | INR 15.0-30.0 Lacs P.A. | On-site | Full Time

Position: Senior Python Developer
Location: Pune / Hyderabad / Gurgaon
Experience: 6+ years

Overall Responsibilities:
- Collaborate closely with business stakeholders to understand evolving requirements and adapt to changing business needs.
- Analyze and comprehend business requirements, assess the tech stack, and guide the development team towards an integrated solution approach.
- Demonstrate a dedicated and committed approach with excellent communication skills, participating actively in agile ceremonies and development processes.

Key Responsibilities:
- Design, develop, and maintain scalable, reusable Python code for various applications.
- Troubleshoot, debug, and provide production support for Python-based applications.
- Work with Python-based data structures, ensuring adherence to best practices for efficiency and maintainability.
- Develop and manage multi-process architectures and handle threading limitations within Python.
- Ensure application security, scalability, and authorization by incorporating best practices in system design and implementation.
- Write unit tests to ensure code coverage, maintainability, and high-quality, production-ready code.
- Integrate user-facing elements with server-side logic.
- Work on API development using FastAPI or similar technologies.
- Leverage frameworks such as Django and Flask to build robust web applications.

Must-Have Skills:
- 6+ years of professional experience as a Software Engineer, with a strong focus on Python development.
- Proficiency in Unix, FTP, and file-handling operations.
- Strong experience working with Agile methodology (JIRA, Confluence).
- Expertise in version control using Git.
- Extensive experience with Python data structures and best practices.
- Hands-on experience with Object-Oriented Programming (OOP) concepts and limitations.
- Strong debugging skills and the ability to troubleshoot and provide production support.
- Proficiency in API development using frameworks like FastAPI or similar technologies.
- Experience with web frameworks such as Django and Flask.
- Familiarity with writing unit tests and ensuring code coverage for production-level code.
- Experience in cross-platform development.
- Ability to ensure the security, scalability, and authorization setup for applications.

Good-to-Have Skills:
- Familiarity with containerization and cloud-based technologies (e.g., Docker, AWS, or similar platforms).
- Knowledge of microservices architecture and cloud-native application development.
- Experience with front-end technologies to better integrate user-facing elements with back-end services.
- Exposure to CI/CD pipeline setup and automated deployments.
- Understanding of multi-threading and concurrency models beyond basic Python threading capabilities.
- Experience working with large-scale distributed systems or big-data platforms.
- Familiarity with data modeling and database management systems.

What we provide:
- Opportunities to develop and grow as an engineer. We are at the forefront of our industry, always expanding into new areas and working with open-source and new technologies.
- A set of hardworking and dedicated peers.
- Growth and mentorship. We believe in growing engineers through ownership and leadership opportunities. We also believe mentors help both sides of the equation.

Education: BE/B.Tech from a Tier 1 or 2 institute.

You are important to us, let's stay connected! Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply as there might be a suitable/unique role for you tomorrow!

Role: Technical Lead
Industry Type: IT Services & Consulting
Department: Engineering - Software & QA
Employment Type: Full Time, Permanent
Role Category: Software Development
Education: UG: Any Graduate; PG: Any Postgraduate; Doctorate: Any Doctorate
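As an illustrative aside for candidates: the "threading limitations within Python" this posting mentions usually refer to the Global Interpreter Lock (GIL). A minimal stdlib-only sketch (the task and numbers are invented, not from the posting) of why CPU-bound work needs processes rather than threads:

```python
# Sketch: the GIL lets only one thread run Python bytecode at a time,
# so threads help with I/O-bound work but not CPU-bound work.
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def count_primes(limit: int) -> int:
    """CPU-bound task: naive prime count below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def run_with_threads(limits):
    # Threads share one interpreter; CPU-bound tasks serialize on the GIL.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(count_primes, limits))

def run_with_processes(limits):
    # Separate processes each get their own interpreter and GIL,
    # so CPU-bound work can actually run in parallel.
    with ProcessPoolExecutor(max_workers=4) as pool:
        return list(pool.map(count_primes, limits))

if __name__ == "__main__":
    limits = [10_000] * 4
    print(run_with_threads(limits))  # correct results, but little speedup
    # run_with_processes(limits) parallelizes, at the cost of pickling
    # arguments and results between processes.
```

The trade-off to discuss in an interview: processes buy true parallelism but pay for inter-process serialization, which is why I/O-bound services still favor threads or async.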

Data Analyst | Gurgaon, Haryana, India | 6-11 years | INR 15.0-20.0 Lacs P.A. | On-site | Full Time

Job description
Role: Data Analyst
Number of open positions: 5
Location: Pune / Gurgaon / Bengaluru

Must-Have:
- 6+ years of work experience in large-scale data applications doing data analysis, data mapping, data modelling, and data transformations.
- Experience in Snowflake and any relational DB.
- Very good verbal and written communication and tracking skills.
- Strong SQL, business requirement gathering, and source-to-target mapping (STTM) writing.
- Exposure to data consolidation, transformation, and standardization across different systems.
- Experience with Banking, Insurance, Property and Casualty Insurance, or Retail domain clients would be an added advantage.
- Self-starter; must be able to integrate quickly into the team and work independently towards team goals.

Role & Responsibilities:
- Act as liaison between the technical and business teams.
- Connect with the client SME and understand functionality and data.
- Perform data profiling; understand the quality of data and the critical data elements.
- Standardize reference and master data across systems.
- Produce data mapping and data lineage docs, data models, and design docs that allow stakeholders to understand data mappings and transformations.

Key Skills: Data analysis, mapping, transformation, and standardization; SQL; data modelling; requirement understanding; STTM writing; insurance domain understanding.

Role: Data Science & Analytics - Other
Industry Type: IT Services & Consulting
Department: Data Science & Analytics
Employment Type: Full Time, Permanent
Role Category: Data Science & Analytics - Other
Education: UG: Any Graduate; PG: Any Postgraduate; Doctorate: Any Doctorate
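As an illustrative aside: a source-to-target mapping (STTM), which this posting asks candidates to write, pairs each target column with a source expression (rename, cast, standardize, decode). A minimal sketch using Python's built-in SQLite; the table and column names are invented for illustration:

```python
# Sketch of an STTM applied as a SQL transformation in SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE src_policy (
        pol_no TEXT, holder_nm TEXT, prem_amt TEXT, stat CHAR(1)
    )""")
conn.executemany(
    "INSERT INTO src_policy VALUES (?, ?, ?, ?)",
    [("P001", "asha rao", "1200.50", "A"),
     ("P002", "vikram s", "980.00", "X")],
)

# STTM: target column <- source expression.
conn.execute("""
    CREATE TABLE tgt_policy AS
    SELECT pol_no                        AS policy_number,  -- rename
           UPPER(holder_nm)              AS holder_name,    -- standardize case
           CAST(prem_amt AS REAL)        AS premium,        -- type conversion
           CASE stat WHEN 'A' THEN 'ACTIVE'
                     ELSE 'INACTIVE' END AS status          -- decode code
    FROM src_policy""")

rows = conn.execute(
    "SELECT policy_number, holder_name, premium, status FROM tgt_policy"
).fetchall()
print(rows)
```

In practice the STTM document itself is the deliverable (one row per target column, with source, transformation rule, and data-quality notes); the SQL above is what that document compiles down to.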

Senior Java Developer | Chennai, Tamil Nadu, India | 5-10 years | INR 10.0-15.0 Lacs P.A. | On-site | Full Time

Job description
Role: Sr. Java Developer / Java Tech Lead
Experience: 5+ years developing software with the following (or similar) enterprise technologies: Java, Spring Framework, Microservices, Spring Security, SQL.
Preferred: hands-on experience with any of these technologies in API development and integration; hands-on experience with any cloud (AWS, Azure, GCP).

Required Skills and Experience:
- Deep understanding of server-side and middle-tier technologies and relational databases.
- Team player who proactively shares information and is unafraid to ask questions.
- Excellent communication, analytical, and leadership skills.
- Strong experience with Agile SDLC processes (Scrum, Kanban, XP).
- Familiarity with microservices and distributed architectures.
- Experience developing, documenting, and managing reusable component and utility libraries.
- Excellent debugging, problem-solving, and analytical skills.
- Experience writing custom CSS to exactly match a component's presentation to a UX design.
- Familiarity with Git (or similar version control systems) best practices in a team setting.

Aptitudes for Success:
- Passion to innovate and a desire to build great software.
- Natural curiosity and strong problem-solving skills.
- Fast learner with the confidence to act proactively.
- Willingness to take ownership and responsibility.
- Strong desire to improve and optimize the end-user experience.

Job Description:
- Design, develop, test, and deploy highly scalable, high-reliability software in a business-critical enterprise environment.
- Work with product leadership and engineering colleagues to clarify requirements, design technical solutions, and develop complex features.
- Partner with cross-functional engineering teams to deliver functionality across team boundaries.
- Collaborate with other developers to plan releases and ensure the team delivers on the committed plan.
- Participate in design and code reviews across the team and establish best practices.
- Collaborate with core teams on shared services such as infrastructure, security, and operations.
- Support and debug critical transactions in the order-processing flow.
- Work with stakeholders to address questions and unblock issues with order fulfillers.
- Perform scalability and performance analysis as needed.

Key Skill Set: Java, Spring Framework, MySQL or MS SQL, REST API, Microservices
Education: Bachelor's (preferably BE/B.Tech) in Computer Science/IT

Role: Technical Lead
Industry Type: IT Services & Consulting
Department: Engineering - Software & QA
Employment Type: Full Time, Permanent
Role Category: Software Development
Education: UG: Any Graduate; PG: Any Postgraduate; Doctorate: Any Doctorate

C / C++ Developer | Bengaluru, Karnataka, India | 4-9 years | INR 15.0-25.0 Lacs P.A. | On-site | Full Time

Location: Hybrid - Pune and Bangalore (Mahadevpura)

Skills:
- Strong proficiency in C/C++ programming.
- Solid understanding of the Linux kernel and device driver development.
- Experience with embedded systems development and debugging.
- Experience with one or more of the following:
  - Platform enablement (I2C, thermals, health, USB, memory management/NAND/eMMC)
  - Options enablement (GPU/NIC/storage)
- Excellent problem-solving and analytical skills.
- Strong communication and teamwork skills.

Software Requirements:
- Proficiency in Linux OS concepts and system programming.
- Experience with multithreading and multiprocessing.
- Experience with storage or server systems is a plus.

Technical Skills:
- Programming languages: C, C++
- Operating systems: Linux
- Hardware: I2C, thermals, health, USB, memory management/NAND/eMMC, GPU, NIC, storage
- Tools: Git, debugging tools, code review tools

Mandatory Skills: C/C++ development; multithreading/multiprocessing; Linux OS concepts and system programming; domain experience in storage or server systems, or embedded C with RTOS and Linux.

Note: Candidates with experience in Automobile/AUTOSAR/Networking will not be considered for this role.

Role: Embedded Systems Engineer
Industry Type: IT Services & Consulting
Department: Engineering - Software & QA
Employment Type: Full Time, Permanent
Role Category: Software Development
Education: UG: Any Graduate; PG: Any Postgraduate; Doctorate: Any Doctorate

Data Modeller | Hyderabad, Telangana, India | 8-12 years | INR 15.0-30.0 Lacs P.A. | On-site | Full Time

Job Title: Data Modeler
Experience: 8+ years
Location: Pune, Gurugram, Hyderabad

Job Summary:
The data architect designs, implements, and documents data architecture and enterprise data modelling solutions, which include the use of relational, dimensional, and NoSQL databases. These solutions support enterprise information management, business intelligence, machine learning, data science, and other business interests.

The successful candidate will:
- Provide technical expertise in needs identification, data modelling, data movement and transformation mapping (source to target), automation, and testing strategies, translating business needs into technical solutions while adhering to established data guidelines and approaches from a business-unit or project perspective.
- Be responsible for the development of conceptual, logical, and physical data models, and the implementation of RDBMSs, operational data stores (ODS), data marts, and data lakes on target platforms (SQL/NoSQL).
- Oversee and govern the expansion of the existing data architecture and the optimization of data query performance via best practices.

The candidate must be able to work independently and collaboratively. We expect people to be leaders not only in the conventional sense but also within a team; the candidate should exhibit leadership qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, knowledge sharing, and approachability.

Essential Duties:
- Understand and translate business needs into data models supporting long-term solutions.
- Work with the Application Development team to implement data strategies, build data flows, and develop conceptual data models.
- Create and maintain conceptual, logical, and physical data models, along with corresponding metadata, using best practices to ensure high data quality and reduced redundancy.
- Optimize and update logical and physical data models to support new and existing projects.
- Develop best practices for standard naming conventions and coding practices to ensure consistency of data models.
- Recommend opportunities for reuse of data models in new environments.
- Perform reverse engineering of physical data models from databases and SQL scripts.
- Evaluate data models and physical databases for variances and discrepancies.
- Validate business data objects for accuracy and completeness.
- Analyse data-related system integration challenges and propose appropriate solutions.
- Develop data models according to company standards.
- Guide system analysts, engineers, programmers, and others on project limitations and capabilities, performance requirements, and interfaces.
- Review modifications to existing software to improve efficiency and performance.
- Examine new application designs and recommend corrections if required.
- Assist with and support setting the data architecture direction (including the data movement approach, architecture/technology strategy, and any other data-related considerations to ensure business value), ensuring data architecture deliverables are developed, ensuring compliance with standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business-unit level.
- Coordinate and consult with the project manager, client business staff, client technical staff, and project developers on data architecture best practices and anything else data-related at the project or business-unit level.

Required Qualifications:
- 7-10 years of industry implementation experience with one or more data modelling tools such as Erwin, ER/Studio, PowerDesigner, etc.
- Minimum of 8 years of data architecture, data modelling, or similar experience.
- 5-7 years of management experience required.
- 5-7 years of consulting experience preferred.
- Experience working with dimensionally modelled data.
- Bachelor's degree or equivalent experience; Master's degree preferred.
- Understanding of cloud (Azure, AWS, GCP; Snowflake preferred) and on-premises architectures.
- Experience in data analysis and profiling.
- Strong data warehousing and OLTP systems experience from an integration perspective.
- Strong understanding of data integration best practices and concepts.
- Strong SQL skills required; scripting preferred.
- Strong knowledge of all phases of the system development life cycle.
- Experience with major database and big-data platforms (e.g., RDS, Aurora, Redshift, Databricks, MySQL, Oracle, PostgreSQL, Hadoop, Snowflake, etc.).
- Understanding of and experience with major data architecture philosophies (Dimensional, ODS, Data Vault, etc.).
- Understanding of modern data warehouse capabilities and technologies such as real-time, cloud, and Big Data.

Preferred Skills & Experience:
- Design data models in Erwin: use relational, dimensional, and NoSQL databases to design data models that meet business requirements.
- Collaborate with others: work with business representatives, the product owner, and data analysts to ensure data models meet business needs.
- Ensure data quality: ensure data models are scalable, efficient, and adherent to standards, using the Data Vault 2.0 methodology.
- Communication: lead design discussions and create data models to integrate data into the Global Data Warehouse.
- Data governance: understand and be able to use Collibra and Reference Data Management for all data-related information.
- Document solutions: document the solution design and present the model to the project team.
- Good to have: commercial insurance domain knowledge and relevant work experience with Data Vault implementation.

Representative project experience:
- Involved in documenting user requirements and business processes and translating business processes into technical documents.
- Created data architecture designs/documents for business processing requirements.
- Analyzed and audited requirements and business processes.
- Prepared a business matrix diagram to identify business processes, dimensions, facts, conformed dimensions, and junk and degenerate dimensions.
- Took part in review and collaboration sessions regarding business processes and canonical (BOM) modeling.
- Profiled source-system data to identify business processes and business requirements against operational-system data.
- Designed/captured business processes and mapped them to the conceptual data model.
- Modeled business processes through UML diagrams using Rational Rose.
- Prepared test scenarios based on an understanding of the business process flow.
- Assisted in process-model development, identifying business process needs and requirements and modeling them into usable, business-driven data processes.
- Participated in requirement gathering, design review, and code review meetings.
- Analysed business processes and optimized existing extract stored procedures.
- Redefined many attributes and relationships in the reverse-engineered model and cleansed unwanted tables/columns as part of data analysis responsibilities.
- Reviewed entities and relationships in the engineered model and cleansed unwanted tables/columns as part of data analysis responsibilities.
- Conducted logical data analysis and data modeling joint application design (JAD) sessions, and documented data-related standards.
- Performed data analysis to support mapping and transformation of data from legacy systems to physical data models.
- Gathered accurate data through data analysis and functional analysis.
- Conducted logical data modeling and data analysis.
- Developed SQL queries for extracting data from test and production databases to perform data analysis and data-quality checks.
- Worked with tools such as TOAD for data analysis, SQL Assistant, Tortoise SVN, and Redgate's SQL Compare.
- Developed robust and efficient Oracle PL/SQL procedures, packages, and functions useful for day-to-day data analysis.
- Conducted data analysis on application views, functions, and triggers; tuned for performance; and fixed missing/wrongly mapped data.
- Involved in data mapping, data analysis, and gap analysis between legacy systems and vendor packages.
- Performed data profiling and data analysis using SQL queries, looking for data issues and anomalies.
- Defined the data architecture strategy, data management strategy, and data standards.
- Consolidated and generated database standards and naming conventions to enhance enterprise data architecture processes.
- Developed an activity-centered data architecture and a process architecture for automated data extraction.
- Analyzed, documented, and articulated the customer's current data warehouse/data mart architecture.
- Generated logical data models for old databases using reverse engineering and documented them in order to implement forward-engineering procedures.
- Designed and implemented procedures for mapping interface data between legacy systems and a new implementation.
- Revised and rewrote design specifications and established internal procedures for product certification.
- Designed standards and procedures to manage metadata in structured and unstructured data environments.
- Mentored existing staff on data administration processes, procedures, and standards.
- Created standards and procedures for metadata collection and repository maintenance.
- Designed and developed several complex database procedures and packages.
- Developed business-specific custom reports using PL/SQL procedures.
- Established auditing procedures to ensure data integrity.
- Developed stored procedures to implement business logic.
- Designed functional-testing standard operating procedures.
- Worked with different teams to provide them with essential stored procedures and packages and the required access to data.
- Developed and optimized database structures, stored procedures, dynamic management views, DDL triggers, cursors, and user-defined functions.
Role: Data Science & Analytics - Other
Industry Type: IT Services & Consulting
Department: Data Science & Analytics
Employment Type: Full Time, Permanent
Role Category: Data Science & Analytics - Other
Education: UG: Any Graduate; PG: Any Postgraduate; Doctorate: Any Doctorate
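As an illustrative aside: the "reverse engineering of physical data models from databases" this posting asks for means recovering tables, columns, and relationships from the database catalog. A stdlib-only sketch using SQLite's PRAGMA interface (the schema below is invented for illustration; tools like Erwin do this at scale against production catalogs):

```python
# Sketch: reverse-engineering a physical data model from catalog metadata.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        total       REAL
    );
""")

def reverse_engineer(conn):
    """Return {table: {'columns': [...], 'fks': [...]}} from catalog metadata."""
    model = {}
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for t in tables:
        # PRAGMA table_info rows: (cid, name, type, notnull, dflt_value, pk)
        cols = [(r[1], r[2], bool(r[3]))
                for r in conn.execute(f"PRAGMA table_info({t})")]
        # PRAGMA foreign_key_list rows: (id, seq, table, from, to, ...)
        fks = [(r[3], r[2], r[4])  # local column, referenced table, referenced column
               for r in conn.execute(f"PRAGMA foreign_key_list({t})")]
        model[t] = {"columns": cols, "fks": fks}
    return model

model = reverse_engineer(conn)
print(model["orders"]["fks"])  # relationships recovered from the catalog
```

The recovered foreign keys become the relationship lines in the reconstructed entity-relationship diagram; the same idea applies to `information_schema` queries on Oracle, PostgreSQL, or SQL Server.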

AWS Data Engineer | Gurgaon, Haryana, India | 5-10 years | INR 20.0-35.0 Lacs P.A. | On-site | Full Time

Role: AWS Data Engineer & Sr. AWS Data Engineer
Experience: 5 to 10 years (6+ years will be considered for senior roles and 8+ for lead roles)
Location: Pune, Gurgaon, Hyderabad, Bangalore (Hybrid)
Our Values: Passion, Continuous Learning, Adaptability, Teamwork, Customer Centricity, Reliability

Job Summary:
We are seeking a highly motivated and experienced AWS Engineer to join our MarTech team in the NFL. This position requires an individual with AWS cloud experience and the ambition to continually keep up with best practices in cloud development. The successful candidate must be able to seek out requirements and create best-in-class cloud-native solutions. The engineer must always create solutions that are repeatable, scalable, and well-governed; deploy and rigorously test solutions to ensure they are robust and secure; and create and maintain diagrams for solutions deployed into production.

Must have:
- 4-6 years of experience with the AWS tech stack (S3, Glue, Redshift, Athena, Lambda, CloudWatch, SQS, IAM roles, CloudTrail).
- 3-5 years of SQL, Python, and PySpark programming experience.
- Experience working with ETL tools.
- Experience with CDC mechanisms for database sources.
- Experience building distributed-architecture systems, especially those handling large data volumes and real-time distribution.
- Initiative and problem-solving skills when working independently.
- Expertise in building high-performance, highly scalable, cloud-based applications.
- Experience with SQL and NoSQL databases.
- Good collaboration and communication skills; highly self-driven; takes ownership.
- Writing well-documented, clean, and effective code is a must.

Good to have:
- AWS cloud certifications.
- Knowledge and experience in designing and developing RESTful services.
- 1-3 years of experience in dbt with data modeling, Airflow, MWAA, SQL, Jinja templating, and packages/macros to build robust, performant, and reliable data transformation and feature-extraction pipelines.
- 1-2 years of experience in Airbyte, building ingestion modules for streaming and batch sources.
- Good experience building real-time streaming data pipelines with Kafka, Kinesis, etc.
- Familiarity with big-data design patterns, modeling, and architecture.
- Working knowledge of DevOps methodologies, including designing CI/CD pipelines.
- Good understanding of data warehousing and data lake concepts.

Responsibilities:
- Create and maintain scalable, robust AWS architecture.
- Develop API-based, CDC, batch, and real-time data pipelines for structured and unstructured datasets.
- Enable integration with third-party systems as needed.
- Ensure solutions are repeatable and scalable across the organization.
- Work with client teams to gather requirements, develop solutions, and deploy them.
- Provide robust solution documentation for a wide audience.
- Collaborate with data professionals to bring applications to life, meeting business needs.
- Prioritize data protection and cloud security in all deliverables.

Education: BE/B.Tech/MS/M.Tech/ME from a reputed institute.

Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply as there might be a suitable/unique role for you tomorrow!

Role: Data Engineer
Industry Type: IT Services & Consulting
Department: Engineering - Software & QA
Employment Type: Full Time, Permanent
Role Category: Software Development
Education: UG: Any Graduate; PG: Any Postgraduate; Doctorate: Any Doctorate
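As an illustrative aside: the "CDC mechanisms for database sources" this posting mentions reduce, at their simplest, to detecting inserts, updates, and deletes between two states of a source table. A toy snapshot-diff sketch (invented data; production pipelines usually read the database's transaction log instead, e.g. via AWS DMS or Debezium):

```python
# Sketch of snapshot-based change data capture: diff two snapshots of a
# source table, keyed by primary key, and emit insert/update/delete events.
def capture_changes(previous: dict, current: dict) -> list:
    """Each snapshot maps primary key -> row dict."""
    events = []
    for key, row in current.items():
        if key not in previous:
            events.append(("insert", key, row))
        elif previous[key] != row:
            events.append(("update", key, row))
    for key in previous:
        if key not in current:
            events.append(("delete", key, previous[key]))
    return events

before = {1: {"name": "asha", "tier": "gold"},
          2: {"name": "vikram", "tier": "silver"}}
after  = {1: {"name": "asha", "tier": "platinum"},  # changed row
          3: {"name": "meera", "tier": "bronze"}}   # new row; key 2 deleted

for event in capture_changes(before, after):
    print(event)
```

Log-based CDC avoids the full-table scans this implies and captures intermediate states between snapshots, which is why it is preferred for large, hot source tables.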

AWS Data Engineer Manager | Gurgaon, Haryana, India | 10-15 years | INR 30.0-45.0 Lacs P.A. | On-site | Full Time

Role: AWS Data Engineering Manager
Experience: 10+ years
Location: Pune, Gurgaon, Hyderabad, Bangalore (Hybrid)
Our Values: Passion, Continuous Learning, Adaptability, Teamwork, Customer Centricity, Reliability

Job Summary:
We are seeking a highly motivated and experienced AWS Engineer to join our MarTech team in the NFL. This position requires an individual with AWS cloud experience and the ambition to continually keep up with best practices in cloud development. The successful candidate must be able to seek out requirements and create best-in-class cloud-native solutions. The engineer must always create solutions that are repeatable, scalable, and well-governed; deploy and rigorously test solutions to ensure they are robust and secure; and create and maintain diagrams for solutions deployed into production.

Must have:
- 4-6 years of experience with the AWS tech stack (S3, Glue, Redshift, Athena, Lambda, CloudWatch, SQS, IAM roles, CloudTrail).
- 3-5 years of SQL, Python, and PySpark programming experience.
- Experience working with ETL tools.
- Experience with CDC mechanisms for database sources.
- Experience building distributed-architecture systems, especially those handling large data volumes and real-time distribution.
- Initiative and problem-solving skills when working independently.
- Expertise in building high-performance, highly scalable, cloud-based applications.
- Experience with SQL and NoSQL databases.
- Good collaboration and communication skills; highly self-driven; takes ownership.
- Writing well-documented, clean, and effective code is a must.

Good to have:
- AWS cloud certifications.
- Knowledge and experience in designing and developing RESTful services.
- 1-3 years of experience in dbt with data modeling, Airflow, MWAA, SQL, Jinja templating, and packages/macros to build robust, performant, and reliable data transformation and feature-extraction pipelines.
- 1-2 years of experience in Airbyte, building ingestion modules for streaming and batch sources.
- Good experience building real-time streaming data pipelines with Kafka, Kinesis, etc.
- Familiarity with big-data design patterns, modeling, and architecture.
- Working knowledge of DevOps methodologies, including designing CI/CD pipelines.
- Good understanding of data warehousing and data lake concepts.

Responsibilities:
- Create and maintain scalable, robust AWS architecture.
- Develop API-based, CDC, batch, and real-time data pipelines for structured and unstructured datasets.
- Enable integration with third-party systems as needed.
- Ensure solutions are repeatable and scalable across the organization.
- Work with client teams to gather requirements, develop solutions, and deploy them.
- Provide robust solution documentation for a wide audience.
- Collaborate with data professionals to bring applications to life, meeting business needs.
- Prioritize data protection and cloud security in all deliverables.

Education: BE/B.Tech/MS/M.Tech/ME from a reputed institute.

Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply as there might be a suitable/unique role for you tomorrow!

Role: Data Engineer
Industry Type: IT Services & Consulting
Department: Engineering - Software & QA
Employment Type: Full Time, Permanent
Role Category: Software Development
Education: UG: Any Graduate; PG: Any Postgraduate; Doctorate: Any Doctorate

AI / ML Architect | Bengaluru, Karnataka, India | 5-8 years | INR 15.0-30.0 Lacs P.A. | On-site | Full Time

Role: AI/ML Architect
Experience: 5 to 7 years
Location: Pune, Gurgaon, Bangalore (Hybrid)
Shift Time: 12:00 PM - 10:00 PM

Clairvoyant, an EXL company, is a global technology consulting and services company founded in 2012, headquartered in Chandler, US, with delivery centers across the globe. We help organizations maximize the value of data by providing data engineering, analytics, machine learning, and user experience consulting and development projects to multiple Fortune 500 clients. Clairvoyant clients rely on its deep vertical knowledge and best-in-class services to drive revenue growth, boost operational efficiencies, and manage risk and compliance. Our team of experts with direct industry experience in data engineering, analytics, machine learning, and user experience has your back at every step.

Our Values: Passion, Continuous Learning, Adaptability, Teamwork, Customer Centricity, Reliability

Must-Have Skills:
- 4+ years of experience in data science/computer science/AI-related roles.
- Bachelor's degree or higher in computer science, data science, engineering, or a related field.
- Experience using major language models (GPT, BERT, Google Gemini, Llama, Claude, etc.) for innovative use cases.
- Proficiency in Python and experience with cloud-based platforms (AWS, Azure, GCP, etc.).
- Experience with GenAI tools such as OpenAI, LangChain, vector DBs, RAG pipelines, and prompt engineering.
- Preferably, experience fine-tuning open-source models (LLaMA, BERT, Hugging Face, etc.).
- Strong analytical and problem-solving skills.
- Strong verbal and written communication skills.
- Strong business acumen and a demonstrated aptitude for analytics that incite action.
- Knowledge of or experience with LangChain, LlamaIndex, or DSPy and associated libraries.

Good-to-Have Skills:
- Experience with large enterprise data in hybrid data storage environments with multiple cloud and on-prem storage systems.
- Familiarity with Kubernetes, the ability to build pipelines, and familiarity with containerizing and creating API endpoints.

Responsibilities:
- Build an enterprise-wide conversational BI chatbot that provides data and analytics information and insights on business questions.
- Design the end-to-end conversational AI architecture, optimizing for cost and ensuring chatbot performance, accuracy, security, and user experience while minimizing latency.
- Design AI agents (for text-to-SQL, table selection, column pruning, intent detection, etc.) and leverage the right LLMs and GenAI frameworks to ensure scalability, performance, and alignment with the company's tech stack.
- Lead and mentor AI developers and data engineers to create LLM/AI applications and deploy them on the cloud or on-prem.
- Design storage, caching, and compute layers for efficient data retrieval and outputs in the AI chatbot.
- Architect, deploy, and optimize Retrieval-Augmented Generation (RAG) systems, and work with vector databases to improve query efficiency and ensure high-quality, relevant outputs for users.
- Implement LangChain or similar frameworks to build custom pipelines for GenAI use cases.
- Develop new prompt-engineering methods to get desired outputs in conversational and other AI apps.
- Research and integrate the latest advancements in generative AI technologies.
- Experiment with fine-tuning and adapting large language models (like GPT and BERT) for new, innovative use cases.
- Filter through the ambiguity present in business problems and come up with a well-structured methodology.

Education: Minimum of 15 years of formal education; BE/B.Tech or Graduate/Postgraduate in Computer Science / Information Technology.

Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply as there might be a suitable/unique role for you tomorrow!
Role: NLP / DL Engineering / Architect
Industry Type: IT Services & Consulting
Department: Data Science & Analytics
Employment Type: Full Time, Permanent
Role Category: Data Science & Machine Learning
Education: UG: Any Graduate; PG: Any Postgraduate; Doctorate: Any Doctorate
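As an illustrative aside: the core of the RAG systems described in this posting is the retrieval step, which ranks documents against a query and stuffs the best matches into the prompt. A toy stdlib-only sketch (bag-of-words "embeddings" and invented documents; a real system would use a learned embedding model and a vector database):

```python
# Sketch of the retrieval step in a RAG pipeline: embed, rank by cosine
# similarity, and build an augmented prompt from the top-k documents.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list, k: int = 2) -> list:
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "revenue grew 12 percent in the third quarter",
    "the cafeteria menu changes on mondays",
    "third quarter revenue was driven by cloud sales",
]
top = retrieve("what drove revenue in the third quarter", docs)
prompt = "Answer using only this context:\n" + "\n".join(top)
print(top)
```

The architecture decisions the posting lists (vector DB choice, caching layer, column pruning for text-to-SQL) all sit around this loop: they control what gets embedded, how fast the top-k lookup runs, and how much context fits in the prompt.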