
1103 Dataflow Jobs - Page 7

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Our technology services client is seeking multiple DevSecOps Security Engineers to join their team on a contract basis. These positions offer strong potential for conversion to full-time employment upon completion of the initial contract period. Below are further details about the role:

Role: DevSecOps Security Engineer
Experience: 5-7 Years
Location: Mumbai, Pune, Hyderabad, Bangalore, Chennai, Kolkata
Notice Period: Immediate to 15 Days
Mandatory Skills: DevOps support, GitHub Actions, CI/CD pipelines, ArgoCD, Snyk, multi-cloud (AWS/Azure/GCP), Git, MS tools, Docker, Kubernetes, JFrog, SCA & SAST

Job Description: A security expert who can write code as needed and knows the difference between object-, class-, and function-based programming. Strong passion and thorough understanding of what it takes to build and operate secure, reliable systems at scale. Strong passion and technical expertise to automate security functions via code. Strong technical expertise with application, cloud, data, and network security best practices. Strong technical expertise with multi-cloud environments, including container/serverless and other microservice architectures. Strong technical expertise with older technology stacks, including mainframes and monolithic architectures. Strong technical expertise with the SDLC, CI/CD tools, and deployment automation. Strong technical expertise with operating security for Windows Server and Linux Server systems. Strong technical expertise with configuration management, version control, and DevOps operational support. Strong experience with implementing security measures for both applications and data, with an understanding of the unique security requirements of data warehouse technologies such as Snowflake.

Role Responsibilities
Development & Enforcement: Develop and enforce engineering security policies and standards. Develop and enforce data security policies and standards. Drive security awareness across the organization.
Collaboration & Expertise: Collaborate with Engineering and Business teams to develop secure engineering practices. Serve as the Subject Matter Expert for Application Security. Work with cross-functional teams to ensure security is considered throughout the software development lifecycle.
Analysis & Configuration: Analyze, develop, and configure security solutions across multi-cloud, on-premises, and colocation environments, ensuring application security, integrity, confidentiality, and availability of data. Lead security testing, vulnerability analysis, and documentation.
Operational Support: Participate in operational on-call duties to support infrastructure across multiple regions and environments (cloud, on-premises, colocation). Develop incident response and recovery strategies.

Qualifications
Basic Qualifications: 5+ years of experience in developing and deploying security technologies. A minimum of a Bachelor's degree in Computer Science, Software Development, Software Engineering, or a related field, or equivalent alternative education, skills, and/or practical experience is required. Experience with modern software development lifecycles and CI/CD practices. Experience with the remediation of vulnerabilities sourced from Static Analysis (SAST), Open Source Scanning (SCA), Mobile Scanning (MAST), and API Scanning. Proficiency in Public Cloud (AWS/Azure/GCP) and Network Security. Experience with Docker, Kubernetes, Security-as-Code, and Infrastructure-as-Code. Experience with one or more general-purpose programming/scripting languages, including but not limited to Java, C/C++, C#, Python, JavaScript, Shell Script, and PowerShell. Strong experience with implementing and managing data protection measures and compliance with data protection regulations (e.g., GDPR, CCPA).

Preferred Qualifications: Strong technical expertise with architecting Public Cloud solutions and processes. Strong technical expertise with networking and Software-Defined Networking (SDN) principles. Strong technical expertise with developing and interpreting network, sequence, and dataflow diagrams. Familiarity with the OWASP Application Security Verification Standard. Experience with direct, remote, and virtual teams. Understanding of at least one compliance framework (HIPAA, HITRUST, PCI, NIST, CSA). Strong technical expertise with Static Analysis, Open Source Scanning, Mobile Scanning, and API Scanning security solutions for data warehouses and big data platforms, particularly with technologies like GitHub Advanced Security, CodeQL, Checkmarx, and Snyk. Strong technical expertise in defining and implementing cyber resilience standards, policies, and programs for distributed cloud and network infrastructure, ensuring robust redundancy and system reliability.

Education: A minimum of a Bachelor's degree in Computer Science, Software Development, Software Engineering, or a related field, or equivalent alternative education, skills, and/or practical experience is required.

If you are interested, share your updated resume with madhuri.p@s3staff.com
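The description above repeatedly calls for automating security functions via code across multi-cloud environments. As a minimal, hedged illustration of that idea (not part of the posting itself), the Python sketch below uses boto3 to flag S3 buckets whose public access is not fully blocked; the AWS account, credentials, and reporting format are assumptions.

```python
# Hedged sketch: audit S3 buckets for incomplete public-access blocking.
# Assumes boto3 credentials are already configured in the environment.
import boto3
from botocore.exceptions import ClientError

def audit_public_buckets():
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            fully_blocked = all(config.values())
        except ClientError:
            # No public-access-block configuration at all counts as a finding.
            fully_blocked = False
        if not fully_blocked:
            print(f"[FINDING] {name}: public access is not fully blocked")

if __name__ == "__main__":
    audit_public_buckets()
```

A script like this would typically run on a schedule from a CI/CD pipeline or a serverless function and feed findings into a ticketing or alerting system.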

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Overview: We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems. Key Responsibilities: Design, develop, test, and maintain scalable ETL data pipelines using Python Work extensively on Google Cloud Platform (GCP) services such as: Dataflow for real-time and batch data processing Cloud Functions for lightweight serverless compute BigQuery for data warehousing and analytics Cloud Composer for orchestration of data workflows (based on Apache Airflow) Google Cloud Storage (GCS) for managing data at scale IAM for access control and security Cloud Run for containerized applications Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery Implement and enforce data quality checks, validation rules, and monitoring Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL Document pipeline designs, data flow diagrams, and operational support procedures Required Skills: 4–6 years of hands-on experience in Python for backend or data engineering projects Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.) Solid understanding of data pipeline architecture, data integration, and transformation techniques Experience in working with version control systems like GitHub and knowledge of CI/CD practices Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.) Good to Have (Optional Skills): Experience working with Snowflake cloud data platform Hands-on knowledge of Databricks for big data processing and analytics Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools Education: Bachelor's degree in Computer Science, a related field, or equivalent experience
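As a rough illustration of the pipeline work described above, the sketch below shows a small Apache Beam job in Python that could run on Dataflow, reading CSV files from Cloud Storage and loading them into BigQuery. The project, bucket, table, and schema names are assumptions for illustration, not details from the posting.

```python
# Minimal batch Beam/Dataflow sketch: GCS CSV -> parse -> BigQuery.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_line(line):
    # Convert "id,amount" CSV rows into BigQuery-ready dictionaries.
    record_id, amount = line.split(",")
    return {"id": record_id, "amount": float(amount)}

def run():
    options = PipelineOptions(
        runner="DataflowRunner",          # use "DirectRunner" for local testing
        project="my-gcp-project",         # assumed project id
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://my-bucket/input/*.csv", skip_header_lines=1)
            | "Parse" >> beam.Map(parse_line)
            | "Write" >> beam.io.WriteToBigQuery(
                "my-gcp-project:analytics.transactions",
                schema="id:STRING,amount:FLOAT",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )

if __name__ == "__main__":
    run()
```

In practice a job like this would be orchestrated by Cloud Composer and followed by the kind of data-quality checks the role mentions.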

Posted 1 week ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About The Organisation
DataFlow Group is a pioneering global provider of specialized Primary Source Verification (PSV) solutions, and background screening and immigration compliance services that assist public and private organizations in mitigating risks to make informed, cost-effective decisions regarding their Applicants and Registrants.

About The Role
We are looking for a highly skilled and experienced Senior ETL & Data Streaming Engineer with over 10 years of experience to play a pivotal role in designing, developing, and maintaining our robust data pipelines. The ideal candidate will have deep expertise in both batch ETL processes and real-time data streaming technologies, coupled with extensive hands-on experience with AWS data services. A proven track record of working with Data Lake architectures and traditional Data Warehousing environments is essential.

Duties And Responsibilities
Design, develop, and implement highly scalable, fault-tolerant, and performant ETL processes using industry-leading ETL tools to extract, transform, and load data from various source systems into our Data Lake and Data Warehouse. Architect and build batch and real-time data streaming solutions using technologies like Talend, Informatica, Apache Kafka or AWS Kinesis to support immediate data ingestion and processing requirements. Utilize and optimize a wide array of AWS data services. Collaborate with data architects, data scientists, and business stakeholders to understand data requirements and translate them into efficient data pipeline solutions. Ensure data quality, integrity, and security across all data pipelines and storage solutions. Monitor, troubleshoot, and optimize existing data pipelines for performance, cost-efficiency, and reliability. Develop and maintain comprehensive documentation for all ETL and streaming processes, data flows, and architectural designs. Implement data governance policies and best practices within the Data Lake and Data Warehouse environments. Mentor junior engineers and contribute to fostering a culture of technical excellence and continuous improvement. Stay abreast of emerging technologies and industry best practices in data engineering, ETL, and streaming.

Qualifications
10+ years of progressive experience in data engineering, with a strong focus on ETL, ELT and data pipeline development. Deep expertise in ETL Tools : Extensive hands-on experience with commercial ETL tools (Talend). Strong proficiency in Data Streaming Technologies : Proven experience with real-time data ingestion and processing using platforms such as AWS Glue, Apache Kafka, AWS Kinesis, or similar. Extensive AWS Data Services Experience : Proficiency with AWS S3 for data storage and management. Hands-on experience with AWS Glue for ETL orchestration and data cataloging. Familiarity with AWS Lake Formation for building secure data lakes. Good to have experience with AWS EMR for big data processing. Data Warehouse (DWH) Knowledge : Strong background in traditional data warehousing concepts, dimensional modeling (Star Schema, Snowflake Schema), and DWH design principles. Programming Languages : Proficient in SQL and at least one scripting language (e.g., Python, Scala) for data manipulation and automation. Database Skills : Strong understanding of relational databases and NoSQL databases. Version Control : Experience with version control systems (e.g., Git). Problem-Solving : Excellent analytical and problem-solving skills with a keen eye for detail.
Communication : Strong verbal and written communication skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences. (ref:hirist.tech)
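As a hedged sketch of the real-time ingestion path mentioned above, the snippet below publishes JSON events to an AWS Kinesis stream with boto3; the stream name, region, and payload shape are illustrative assumptions. A Kafka producer or an AWS Glue streaming job could fill the same role.

```python
# Minimal Kinesis producer sketch for real-time ingestion (assumed stream name).
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def publish_event(event: dict, stream_name: str = "ingest-stream") -> None:
    # Each record needs a partition key; the event id is used here.
    kinesis.put_record(
        StreamName=stream_name,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event["id"]),
    )

publish_event({"id": 42, "source": "orders", "amount": 199.0})
```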

Posted 1 week ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About The Organisation
DataFlow Group is a pioneering global provider of specialized Primary Source Verification (PSV) solutions, and background screening and immigration compliance services that assist public and private organizations in mitigating risks to make informed, cost-effective decisions regarding their Applicants and Registrants.

About The Role
We're currently searching for an experienced business analyst to help guide our organization into the future. From researching progressive systems solutions to evaluating their impacts, the ideal candidate will be a detailed planner, expert communicator, and top-notch analyst. This person should also be wholly committed to the discovery and development of innovative solutions in an ever-changing digital landscape.

Duties And Responsibilities
Strategic Alignment : Collaborate closely with senior leadership (e.g., C-suite executives, Directors) to understand their strategic goals, key performance indicators (KPIs), and critical information needs.
Requirements Elicitation & Analysis : Facilitate workshops, interviews, and other elicitation techniques to gather detailed business requirements for corporate analytics dashboards. Analyze and document these requirements clearly, concisely, and unambiguously, ensuring alignment with overall business strategy.
User Story & Acceptance Criteria Definition : Translate high-level business requirements into detailed user stories with clear and measurable acceptance criteria for the development team.
Data Understanding & Mapping : Work with data owners and subject matter experts to understand underlying data sources, data quality, and data governance policies relevant to the dashboards. Collaborate with the development team on data mapping and transformation logic.
Dashboard Design & Prototyping Collaboration : Partner with UI/UX designers and the development team to conceptualize and prototype dashboard layouts, visualizations, and user interactions that effectively communicate key insights to senior stakeholders. Provide feedback and ensure designs meet business requirements and usability standards.
Stakeholder Communication & Management : Act as the central point of contact between senior leadership and the development team. Proactively communicate progress, challenges, and key decisions to all stakeholders. Manage expectations and ensure alignment throughout the project lifecycle.
Prioritization & Backlog Management : Work with stakeholders to prioritize dashboard development based on business value and strategic importance. Maintain and groom the product backlog, ensuring it reflects current priorities and requirements.
Testing & Validation Support : Support the testing phase by reviewing test plans, participating in user acceptance testing (UAT), and ensuring the delivered dashboards meet the defined requirements and acceptance criteria.
Training & Documentation : Develop and deliver training materials and documentation for senior users on how to effectively utilize the new dashboards and interpret the presented data.
Continuous Improvement : Gather feedback from users post-implementation and work with the development team to identify areas for improvement and future enhancements to the corporate analytics platform.
Industry Best Practices : Stay abreast of the latest trends and best practices in business intelligence, data visualization, and analytics.
Project Management : Develop and maintain project plans for agreed initiatives in collaboration with stakeholders.
Monitor project progress against defined timelines, and prepare and present regular project status reports to stakeholders.

Qualifications
Bachelor's degree in Business Administration, Computer Science, Information Systems, Economics, Finance, or a related field. A minimum of 10 years of experience as a Business Analyst, with a significant focus on business intelligence, data analytics, and dashboard development projects. Proven experience in leading requirements gathering and analysis efforts with senior leadership and executive stakeholders, with the ability to translate complex business requirements into clear and actionable technical specifications. Demonstrable experience in managing BI and dashboarding projects, including project planning, risk management, and stakeholder communication. Strong understanding of reporting, data warehousing concepts, ETL processes and data modeling principles. Excellent knowledge of data visualization best practices and principles of effective dashboard design. Experience working with common business intelligence and data visualization tools (e.g., Tableau, Power BI, Qlik Sense). Exceptional communication (written and verbal), presentation, and interpersonal skills, with the ability to effectively communicate with both business and technical audiences. Strong facilitation and negotiation skills to lead workshops and drive consensus among diverse stakeholder groups. Excellent analytical and problem-solving skills with keen attention to detail. Ability to work independently and manage multiple priorities in a fast-paced environment. Experience with Agile methodologies (e.g., Scrum, Kanban). (ref:hirist.tech)

Posted 1 week ago

Apply

4.0 - 5.0 years

0 Lacs

Greater Kolkata Area

On-site

Role : Data Integration Specialist Experience : 4 - 5 Years Location : India Employment Type : Full-time About The Role We are looking for a highly skilled and motivated Data Integration Specialist with 4 to 5 years of hands-on experience to join our growing team in India. In this role, you will be responsible for designing, developing, implementing, and maintaining robust data pipelines and integration solutions that connect disparate systems and enable seamless data flow across the enterprise. You'll play a crucial part in ensuring data availability, quality, and consistency for various analytical and operational needs. Key Responsibilities ETL/ELT Development : Design, develop, and optimize ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes using industry-standard tools and technologies. Data Pipeline Construction : Build and maintain scalable and efficient data pipelines from various source systems (databases, APIs, flat files, streaming data, cloud sources) to target data warehouses, data lakes, or analytical platforms. Tool Proficiency : Hands-on experience with at least one major ETL tool such as Talend, Informatica PowerCenter, SSIS, Apache NiFi, IBM DataStage, or similar platforms. Database Expertise : Proficient in writing and optimizing complex SQL queries across various relational databases (e.g., SQL Server, Oracle, PostgreSQL, MySQL) and NoSQL databases. Cloud Data Services : Experience with cloud-based data integration services on platforms like AWS (Glue, Lambda, S3, Redshift), Azure (Data Factory, Synapse Analytics), or GCP (Dataflow, BigQuery) is highly desirable. Scripting : Develop and maintain scripts (e.g., Python, Shell scripting) for automation, data manipulation, and orchestration of data processes. Data Modeling : Understand and apply data modeling concepts (e.g., dimensional modeling, Kimball/Inmon methodologies) for data warehousing solutions. Data Quality & Governance : Implement data quality checks, validation rules, and participate in establishing data governance best practices to ensure data accuracy and reliability. Performance Tuning : Monitor, troubleshoot, and optimize data integration jobs and pipelines for performance, scalability, and reliability. Collaboration & Documentation : Work closely with data architects, data analysts, business intelligence developers, and business stakeholders to gather requirements, design solutions, and deliver data assets. Create detailed technical documentation for data flows, mappings, and transformations. Problem Solving : Identify and resolve complex data-related issues, ensuring data integrity and consistency. Qualifications Education : Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related quantitative field. Experience : 4 to 5 years of dedicated experience in data integration, ETL development, or data warehousing. Core Skills : Strong proficiency in SQL and at least one leading ETL tool (as listed above). Programming : Hands-on experience with Python or Shell scripting for data manipulation and automation. Databases : Solid understanding of relational database concepts and experience with various database systems. Analytical Thinking : Excellent analytical, problem-solving, and debugging skills with attention to detail. Communication : Strong verbal and written communication skills to articulate technical concepts to both technical and non-technical audiences. 
Collaboration : Ability to work effectively in a team environment and collaborate with cross-functional teams. Preferred/Bonus Skills Experience with real-time data integration or streaming technologies (e.g., Kafka, Kinesis). Knowledge of Big Data technologies (e.g., Hadoop, Spark). Familiarity with CI/CD pipelines for data integration projects. Exposure to data visualization tools (e.g., Tableau, Power BI). Experience in specific industry domains (e.g., Finance, Healthcare, Retail) (ref:hirist.tech)
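To make the ETL/ELT responsibilities above concrete, here is a minimal, hedged sketch of one extract-transform-load step in Python using pandas and SQLAlchemy; the connection strings, table names, and columns are assumptions for illustration only.

```python
# Hedged ETL sketch: extract from a source Postgres table, apply a simple
# cleansing transform, and load into a target warehouse table.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("postgresql://user:pass@source-db:5432/sales")       # assumed
target = create_engine("postgresql://user:pass@warehouse:5432/analytics")   # assumed

# Extract
orders = pd.read_sql("SELECT order_id, amount, created_at FROM orders", source)

# Transform: basic cleansing plus a derived column as a data-quality example.
orders = orders.dropna(subset=["amount"])
orders["order_date"] = pd.to_datetime(orders["created_at"]).dt.date

# Load
orders.to_sql("fact_orders", target, if_exists="append", index=False)
```

A dedicated ETL tool (Talend, Informatica, SSIS, NiFi) or an orchestrator such as Airflow would normally schedule and monitor a step like this.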

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
Skilled in multiple GCP services: GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer, etc. Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations. Ability to analyse data for functional business requirements and to interface directly with the customer.

Preferred Education: Master's Degree

Required Technical And Professional Expertise
5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform. Skilled in multiple GCP services: GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer. You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies. Ambitious individual who can work under their own direction towards agreed targets/goals and with a creative approach to work.

Preferred Technical And Professional Experience
Intuitive individual with an ability to manage change and proven time management. Proven interpersonal skills while contributing to team effort by accomplishing related results as needed. Keeps technical knowledge up to date by attending educational workshops and reviewing publications.
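As a small, hedged example of the BigQuery analysis work this role describes, the snippet below runs an aggregate query with the google-cloud-bigquery Python client; the project, dataset, and table names are assumptions.

```python
# Minimal BigQuery query sketch using the official Python client.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # assumed project id

sql = """
    SELECT status, COUNT(*) AS orders
    FROM `my-gcp-project.sales.orders`
    GROUP BY status
    ORDER BY orders DESC
"""

# query() submits the job; result() waits for completion and returns rows.
for row in client.query(sql).result():
    print(row["status"], row["orders"])
```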

Posted 1 week ago

Apply

9.0 years

5 - 10 Lacs

Thiruvananthapuram

On-site

9 - 12 Years | 1 Opening | Trivandrum

Role description
Role Proficiency: Leverage expertise in a technology area (e.g. Informatica transformation, Teradata data warehouse, Hadoop, Analytics). Responsible for architecture for small/mid-size projects.

Outcomes: Implement either data extract and transformation, a data warehouse (ETL, data extracts, data load logic, mapping, workflows, stored procedures, data warehouse), a data analysis solution, data reporting solutions, or cloud data tools on any one of the cloud providers (AWS/Azure/GCP). Understand business workflows and related data flows. Develop designs for data acquisition and data transformation or data modelling; apply business intelligence on data or design data fetching and dashboards. Design information structure, work- and dataflow navigation. Define backup, recovery and security specifications. Enforce and maintain naming standards and a data dictionary for data models. Provide or guide the team to perform estimates. Help the team to develop proofs of concept (POC) and solutions relevant to customer problems. Able to troubleshoot problems while developing POCs. Architect/Big Data speciality certification in AWS/Azure/GCP/General (for example Coursera or a similar learning platform) or any ML.

Measures of Outcomes: Percentage of billable time spent in a year developing and implementing data transformation or data storage. Number of best practices documented for any new tool and technology emerging in the market. Number of associates trained on the data service practice.

Outputs Expected:
Strategy & Planning: Create or contribute short-term tactical solutions to achieve long-term objectives and an overall data management roadmap. Implement methods and procedures for tracking data quality, completeness, redundancy and improvement. Ensure that data strategies and architectures meet regulatory compliance requirements. Begin engaging external stakeholders, including standards organizations, regulatory bodies, operators and scientific research communities, or attend conferences with respect to data in the cloud.
Operational Management: Help architects to establish governance, stewardship and frameworks for managing data across the organization. Provide support in implementing the appropriate tools, software, applications and systems to support data technology goals. Collaborate with project managers and business teams for all projects involving enterprise data. Analyse data-related issues with systems integration, compatibility and multi-platform integration.
Project Control and Review: Provide advice to teams facing complex technical issues in the course of project delivery. Define and measure project- and program-specific architectural and technology quality metrics.
Knowledge Management & Capability Development: Publish and maintain a repository of solutions, best practices, standards and other knowledge articles for data management. Conduct and facilitate knowledge sharing and learning sessions across the team. Gain industry-standard certifications in the technology or area of expertise. Support technical skill building (including hiring and training) for the team based on inputs from the project manager/RTEs. Mentor new members of the team in technical areas. Gain and cultivate domain expertise to provide the best and optimized solution to the customer (delivery).
Requirement gathering and Analysis: Work with customer business owners and other teams to collect, analyze and understand the requirements, including NFRs/define NFRs. Analyze gaps/trade-offs based on the current system context and industry practices;
clarify the requirements by working with the customer. Define the systems and sub-systems that define the programs.
People Management: Set goals and manage the performance of team engineers. Provide career guidance to technical specialists and mentor them.
Alliance Management: Identify alliance partners based on an understanding of service offerings and client requirements. In collaboration with the Architect, create a compelling business case around the offerings. Conduct beta testing of the offerings and their relevance to the program.
Technology Consulting: In collaboration with Architects II and III, analyze the application and technology landscape, processes and tools to arrive at the architecture options that best fit the client program. Analyze cost vs. benefits of solution options. Support Architects II and III to create a technology/architecture roadmap for the client. Define the architecture strategy for the program.
Innovation and Thought Leadership: Participate in internal and external forums (seminars, paper presentations, etc.). Understand the client's existing business at the program level and explore new avenues to save cost and bring process efficiency. Identify business opportunities to create reusable components/accelerators and reuse existing components and best practices.
Project Management Support: Assist the PM/Scrum Master/Program Manager to identify technical risks and come up with mitigation strategies.
Stakeholder Management: Monitor the concerns of internal stakeholders like Product Managers and RTEs and external stakeholders like client architects on architecture aspects. Follow through on commitments to achieve timely resolution of issues. Conduct initiatives to meet client expectations. Work to expand the professional network in the client organization at team and program levels.
New Service Design: Identify potential opportunities for new service offerings based on customer voice/partner inputs. Conduct beta testing/POC as applicable. Develop collateral and guides for GTM.
Skill Examples: Use data services knowledge to create POCs that meet business requirements; contextualize the solution to the industry under the guidance of Architects. Use technology knowledge to create Proof of Concept (POC)/(reusable) assets under the guidance of the specialist. Apply best practices in your own area of work, helping with performance troubleshooting and other complex troubleshooting. Define, decide and defend the technology choices made; review solutions under guidance. Use knowledge of technology trends to provide inputs on potential areas of opportunity for UST. Use independent knowledge of design patterns, tools and principles to create high-level designs for the given requirements. Evaluate multiple design options and choose the appropriate options for the best possible trade-offs. Conduct knowledge sessions to enhance the team's design capabilities. Review the low- and high-level designs created by Specialists for efficiency (consumption of hardware, memory and memory leaks, etc.). Use knowledge of software development processes, tools and techniques to identify and assess incremental improvements to the software development process, methodology and tools. Take technical responsibility for all stages in the software development process. Conduct optimal coding with a clear understanding of memory leakage and related impact.
Implement global standards and guidelines relevant to programming and development; come up with 'points of view' and new technological ideas. Use knowledge of project management and agile tools and techniques to support, plan and manage medium-size projects/programs as defined within UST, identifying risks and mitigation strategies. Use knowledge of project metrics to understand their relevance in the project; collect and collate project metrics and share them with the relevant stakeholders. Use knowledge of estimation and resource planning to create estimates and plan resources for specific modules or small projects with detailed requirements or user stories in place. Strong proficiency in understanding data workflows and dataflow. Attention to detail. High analytical capabilities.
Knowledge Examples: Data visualization. Data migration. RDBMSs (relational database management systems), SQL, Hadoop technologies like MapReduce, Hive and Pig. Programming languages, especially Python and Java. Operating systems like UNIX and MS Windows. Backup/archival software.
Additional Comments: AI Architect
Role Summary: Hands-on AI Architect with strong expertise in Deep Learning, Generative AI, and real-world AI/ML systems. The role involves leading the architecture, development, and deployment of AI agent-based solutions, supporting initiatives such as intelligent automation, anomaly detection, and GenAI-powered assistants across enterprise operations and engineering. This is a hands-on role ideal for someone who thrives in fast-paced environments, is passionate about AI innovations, and can adapt across multiple opportunities based on business priorities.
Key Responsibilities:
• Design and architect AI-based solutions, including multi-agent GenAI systems using LLMs and RAG pipelines.
• Build POCs, prototypes, and production-grade AI components for operations, support automation, and intelligent assistants.
• Lead end-to-end development of AI agents for use cases such as triage, RCA automation, and predictive analytics.
• Leverage GenAI (LLMs) and time series models to drive intelligent observability and performance management.
• Work closely with product, engineering, and operations teams to align solutions with domain and customer needs.
• Own the model lifecycle from experimentation to deployment using modern MLOps and LLMOps practices.
• Ensure scalable, secure, and cost-efficient implementation across AWS and Azure cloud environments.
Key Skills & Technology Areas:
• AI/ML Expertise: 8+ years in AI/ML, with hands-on experience in deep learning, model deployment, and GenAI.
• LLMs & Frameworks: GPT-3+, Claude, LLAMA3, LangChain, LangGraph, Transformers (BERT, T5), RAG pipelines, LLMOps.
• Programming: Python (advanced), Keras, PyTorch, Pandas, FastAPI, Celery (for agent orchestration), Redis.
• Modeling & Analytics: Time Series Forecasting, Predictive Modeling, Synthetic Data Generation.
• Data & Storage: ChromaDB, Pinecone, FAISS, DynamoDB, PostgreSQL, Azure Synapse, Azure Data Factory.
• Cloud & Tools: AWS (Bedrock, SageMaker, Lambda), Azure (Azure ML, Azure Databricks, Synapse), GCP (Vertex AI – optional).
• Observability Integration: Splunk, ELK Stack, Prometheus.
• DevOps/MLOps: Docker, GitHub Actions, Kubernetes, CI/CD pipelines, model monitoring & versioning.
• Architectural Patterns: Microservices, Event-Driven Architecture, Multi-Agent Systems, API-first Design.
Other Requirements:
• Proven ability to work independently and collaboratively in agile, innovation-driven teams.
• Strong problem-solving mindset and product-oriented thinking.
• Excellent communication and technical storytelling skills.
• Flexibility to work across multiple opportunities based on business priorities.
• Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus.
Skills: python, pandas, AIML, GENAI
About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
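Since the architect role above centers on LLM- and RAG-based agents, here is a deliberately library-agnostic sketch of the retrieval step in a RAG pipeline. The embedding model and vector store (ChromaDB, Pinecone, FAISS) are abstracted away; everything shown is an assumption for illustration, not the client's actual design.

```python
# Conceptual RAG retrieval sketch: rank stored chunks by cosine similarity to
# a query embedding, then pass the top-k chunks to an LLM as grounding context.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vector: np.ndarray,
             store: list[tuple[str, np.ndarray]],
             k: int = 3) -> list[tuple[str, float]]:
    # 'store' stands in for a vector database; each entry is (chunk_text, embedding).
    scored = [(text, cosine_similarity(query_vector, vec)) for text, vec in store]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# In a real system, an embedding model (e.g. via Vertex AI or sentence-transformers)
# would produce query_vector and the stored embeddings; the retrieved chunks would
# then be inserted into the LLM prompt by the agent framework.
```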

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Thiruvananthapuram

On-site

5 - 7 Years | 2 Openings | Trivandrum

Role description
Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks and DataProc, with coding expertise in Python, PySpark and SQL. Works independently and has a deep understanding of data warehousing solutions, including Snowflake, BigQuery, Lakehouse and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.

Outcomes: Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance and performance using design patterns and reusing proven solutions. Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives, integrating the overall solution seamlessly. Develop and manage data storage solutions, including relational databases, NoSQL databases and data lakes. Stay updated on the latest trends and best practices in data engineering, cloud technologies and big data tools. Influence and improve customer satisfaction through effective data solutions.

Measures of Outcomes: Adherence to engineering processes and standards. Adherence to schedule/timelines. Adherence to SLAs where applicable. Number of defects post delivery. Number of non-compliance issues. Reduction of recurrence of known defects. Quick turnaround of production bugs. Completion of applicable technical/domain certifications. Completion of all mandatory training requirements. Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times). Average time to detect, respond to and resolve pipeline failures or data issues. Number of data security incidents or compliance breaches.

Outputs Expected:
Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates and checklists. Review code for team members and peers.
Documentation: Create and review templates, checklists, guidelines and standards for design, processes and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases and results.
Configuration: Define and govern the configuration management plan. Ensure compliance within the team.
Testing: Review and create unit test cases, scenarios and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.
Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.
Project Management: Manage the delivery of modules effectively.
Defect Management: Perform root cause analysis (RCA) and mitigation of defects.
Identify defect trends and take proactive measures to improve quality.
Estimation: Create and provide input for effort and size estimation for projects.
Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries and client universities. Review reusable documents created by the team.
Release Management: Execute and monitor the release process to ensure smooth transitions.
Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD) and system architecture for applications, business components and data models.
Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.
Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.
Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.
Skill Examples: Proficiency in SQL, Python or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage, retrieval and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components.
Knowledge Examples: Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF and ADLF. Proficiency in SQL for analytics, including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering.
Additional Comments: Data Engineering
Role Summary: Skilled Data Engineer with strong Python programming skills and experience in building scalable data pipelines across cloud environments. The candidate should have a good understanding of ML pipelines and basic exposure to GenAI solutioning. This role will support large-scale AI/ML and GenAI initiatives by ensuring high-quality, contextual, and real-time data availability.
Key Responsibilities:
• Design, build, and maintain robust, scalable ETL/ELT data pipelines in AWS/Azure environments.
• Develop and optimize data workflows using PySpark, SQL, and Airflow.
• Work closely with AI/ML teams to support training pipelines and GenAI solution deployments.
• Integrate data with vector databases like ChromaDB or Pinecone for RAG-based pipelines.
• Collaborate with solution architects and GenAI leads to ensure reliable, real-time data availability for agentic AI and automation solutions.
• Support data quality, validation, and profiling processes.
Key Skills & Technology Areas:
• Programming & Data Processing: Python (4–6 years), PySpark, Pandas, NumPy
• Data Engineering & Pipelines: Apache Airflow, AWS Glue, Azure Data Factory, Databricks
• Cloud Platforms: AWS (S3, Lambda, Glue), Azure (ADF, Synapse), GCP (optional)
• Databases: SQL/NoSQL, Postgres, DynamoDB, Vector databases (ChromaDB, Pinecone) – preferred
• ML/GenAI Exposure (basic): Hands-on with Pandas and scikit-learn; knowledge of RAG pipelines and GenAI concepts
• Data Modeling: Star/Snowflake schema, data normalization, dimensional modeling
• Version Control & CI/CD: Git, Jenkins, or similar tools for pipeline deployment
Other Requirements:
• Strong problem-solving and analytical skills
• Flexible to work on fast-paced and cross-functional priorities
• Experience collaborating with AI/ML or GenAI teams is a plus
• Good communication and a collaborative, team-first mindset
• Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus.
Skills: ETL, BIGDATA, PYSPARK, SQL
About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
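As a hedged illustration of the PySpark pipeline work listed above, the sketch below reads raw CSV data, applies a simple cleansing transform, and writes partitioned Parquet; the paths and column names are assumptions, and the same logic could be wrapped in a Databricks, Glue, or Dataproc job.

```python
# Minimal PySpark batch ETL sketch: raw CSV -> cleanse -> partitioned Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.option("header", True).csv("s3://raw-zone/orders/")  # assumed path

cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .withColumn("order_date", F.to_date("created_at"))
)

cleaned.write.mode("overwrite").partitionBy("order_date").parquet("s3://curated-zone/orders/")
```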

Posted 1 week ago

Apply

15.0 years

0 Lacs

Hyderābād

On-site

Project Role: AI / ML Engineer
Project Role Description: Develops applications and systems that utilize AI to improve performance and efficiency, including but not limited to deep learning, neural networks, chatbots, and natural language processing.
Must-have skills: Google Cloud Machine Learning Services
Good-to-have skills: Google Pub/Sub, GCP Dataflow, Google Dataproc
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an AI / ML Engineer, you will engage in the development of applications and systems that leverage artificial intelligence to enhance performance and efficiency. Your typical day will involve collaborating with cross-functional teams to design and implement innovative solutions, utilizing advanced technologies such as deep learning and natural language processing. You will also be responsible for analyzing data and refining algorithms to ensure optimal functionality and user experience, while continuously exploring new methodologies to drive improvements in AI applications.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the design and development of AI-driven applications to meet project requirements.
- Collaborate with team members to troubleshoot and resolve technical challenges.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Google Cloud Machine Learning Services.
- Good-To-Have Skills: Experience with GCP Dataflow, Google Pub/Sub, Google Dataproc.
- Strong understanding of machine learning frameworks and libraries.
- Experience in deploying machine learning models in cloud environments.
- Familiarity with data preprocessing and feature engineering techniques.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Google Cloud Machine Learning Services.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
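For a concrete (but assumed) example of the Pub/Sub integration listed as a good-to-have skill above, the snippet below publishes a prediction request to a Google Pub/Sub topic with the official Python client; the project id, topic name, and payload are illustrative only.

```python
# Minimal Pub/Sub publisher sketch: send a prediction request as a JSON message.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "prediction-requests")  # assumed ids

payload = json.dumps({"user_id": 123, "features": [0.4, 1.7, 3.2]}).encode("utf-8")
future = publisher.publish(topic_path, payload)   # returns a future
print("Published message id:", future.result())   # blocks until the server acks
```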

Posted 1 week ago

Apply

3.0 years

7 - 8 Lacs

Delhi

On-site

Job Title: Nursing Assistant (Male/Female)
Location: Oman
Joining: Immediate (within 15 days)
Salary: Up to OMR 300/month

Key Responsibilities: Provide direct patient care under the supervision of Registered Nurses. Assist patients with daily living activities, including hygiene, feeding, and mobility. Take and record vital signs, monitor patient conditions, and report changes. Ensure patient comfort and safety at all times. Support clinical staff in carrying out medical procedures and routine tasks. Maintain accurate documentation and patient records. Follow infection control and hygiene protocols strictly.

Requirements: Qualification: GNM (General Nursing and Midwifery) or BSc in Nursing. Experience: Minimum 3 years of relevant hospital/clinical experience. Mandatory: Positive Dataflow report. Gender: Male candidates only. Readiness to join within 15 days.

Benefits: Competitive salary up to OMR 300/-. Accommodation and transportation as per company norms. Medical insurance and other statutory benefits provided.

Other benefits: Free joining ticket (reimbursed after the 3-month probation period). 30 days' paid annual leave after 1 year of service completion. Yearly up-and-down air ticket. Medical insurance. Life insurance. Accommodation (chargeable up to OMR 20/-).

Note: This is an urgent requirement. Only candidates who can join immediately or within 15 days and have a Positive Dataflow report will be considered.

Job Types: Full-time, Permanent
Pay: ₹66,000.00 - ₹70,000.00 per month
Benefits: Cell phone reimbursement, Health insurance, Internet reimbursement, Leave encashment, Life insurance, Paid sick time, Paid time off, Provident Fund
Schedule: Monday to Friday, Rotational shift
Supplemental Pay: Joining bonus, Overtime pay, Performance bonus, Yearly bonus
Experience: Nursing: 3 years (Required); Positive Dataflow reports: 2 years (Required)
Work Location: In person

Posted 1 week ago

Apply

0 years

3 - 5 Lacs

Chennai

On-site

Good knowledge of GCP, BigQuery, SQL Server, and Postgres DB. Knowledge of Datastream, Cloud Dataflow, Terraform, ETL tools, writing procedures and functions, writing dynamic code, performance tuning and complex queries, and UNIX.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
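As a hedged sketch of the procedure-writing and dynamic-query skills listed above, the Python snippet below runs a parameterized query and calls a stored procedure on Postgres via psycopg2; the connection details, table, and procedure name are assumptions.

```python
# Minimal Postgres sketch: parameterized query plus a stored-procedure call.
import psycopg2

conn = psycopg2.connect(host="db-host", dbname="reporting", user="app", password="secret")
with conn, conn.cursor() as cur:
    # Parameterized query: never build SQL by concatenating user input.
    cur.execute("SELECT count(*) FROM orders WHERE created_at >= %s", ("2024-01-01",))
    print("Orders this year:", cur.fetchone()[0])

    # Invoke an existing stored procedure (hypothetical name).
    cur.execute("CALL refresh_daily_aggregates(%s)", ("2024-01-01",))
conn.close()
```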

Posted 1 week ago

Apply

0 years

0 Lacs

Andhra Pradesh

On-site

Design and develop robust ETL pipelines using Python, PySpark, and GCP services. Build and optimize data models and queries in BigQuery for analytics and reporting. Ingest, transform, and load structured and semi-structured data from various sources. Collaborate with data analysts, scientists, and business teams to understand data requirements. Ensure data quality, integrity, and security across cloud-based data platforms. Monitor and troubleshoot data workflows and performance issues. Automate data validation and transformation processes using scripting and orchestration tools. Required Skills & Qualifications: Hands-on experience with Google Cloud Platform (GCP), especially BigQuery. Strong programming skills in Python and/or PySpark. Experience in designing and implementing ETL workflows and data pipelines. Proficiency in SQL and data modeling for analytics. Familiarity with GCP services such as Cloud Storage, Dataflow, Pub/Sub, and Composer. Understanding of data governance, security, and compliance in cloud environments. Experience with version control (Git) and agile development practices. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
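To illustrate the automated data-validation step mentioned above, here is a minimal, hedged sketch of rule-based checks on a pandas DataFrame before loading it into BigQuery; the column names and rules are assumptions chosen for the example.

```python
# Simple data-quality gate: report problems before the load step runs.
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality problems found in the frame."""
    problems = []
    if df["id"].isnull().any():
        problems.append("null ids found")
    if df["id"].duplicated().any():
        problems.append("duplicate ids found")
    if (df["amount"] < 0).any():
        problems.append("negative amounts found")
    return problems

# Tiny example frame; in a real pipeline this would come from an extract step.
df = pd.DataFrame({"id": [1, 2, 2], "amount": [10.0, -5.0, 3.5]})
for problem in validate(df):
    print("Data quality issue:", problem)
```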

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Job Title: Senior Data Engineer with GCP
Location: Gurgaon, Haryana (Hybrid/Remote options available)
Experience: 5+ years
Employment Type: Full-time

About the Role
We are seeking a highly skilled and motivated Senior Data Engineer with hands-on experience across GCP data ecosystems. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and architectures that support advanced analytics and real-time data processing.

Key Responsibilities
Technical Responsibilities
Data Pipeline Development: Design and implement robust ETL/ELT pipelines using cloud-native tools.
Cloud Expertise: GCP: Cloud Dataproc, Dataflow, Composer, Stream Analytics.
Data Modeling: Develop and optimize data models for analytics and reporting.
Data Governance: Ensure data quality, security, and compliance across platforms.
Automation & Orchestration: Use tools like Apache Airflow, AWS Step Functions, and GCP Composer for workflow orchestration.
Monitoring & Optimization: Implement monitoring, logging, and performance tuning for data pipelines.
Collaboration & Communication
Work closely with data scientists, analysts, and business stakeholders to understand data needs. Translate business requirements into scalable technical solutions. Participate in code reviews, architecture discussions, and agile ceremonies.

Required Qualifications
Technical Skills
Strong programming skills in Python, SQL, and optionally Scala or Java. Deep understanding of distributed computing, data warehousing, and stream processing. Experience with data lake architectures, data mesh, and real-time analytics. Proficiency in CI/CD practices and infrastructure as code (e.g., Terraform, CloudFormation).
Certifications (Preferred): Google Professional Data Engineer
Soft Skills & Attributes
Analytical Thinking: Ability to break down complex problems and design scalable solutions.
Communication: Strong verbal and written communication skills to explain technical concepts to non-technical stakeholders.
Collaboration: Team player with a proactive attitude and the ability to work in cross-functional teams.
Adaptability: Comfortable working in a fast-paced, evolving environment with shifting priorities.
Ownership: High sense of accountability and a drive to deliver high-quality solutions.
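As a hedged sketch of the Airflow/Composer orchestration responsibility above, the snippet below defines a two-task daily DAG; the task logic, DAG id, and schedule are assumptions for illustration.

```python
# Minimal Airflow 2.x / Cloud Composer DAG sketch: extract then load, once a day.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")  # placeholder task logic

def load():
    print("load data into the warehouse")      # placeholder task logic

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```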

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

chennai, tamil nadu

On-site

You will be joining as a GCP Data Architect at TechMango, a rapidly growing IT Services and SaaS Product company located in Madurai and Chennai. With over 12 years of experience, you are expected to start immediately and work from the office. TechMango specializes in assisting global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. In this role, you will be leading data modernization efforts for a prestigious client, Livingston, in a highly strategic project. As a GCP Data Architect, your primary responsibility will be to design and implement scalable, high-performance data solutions on Google Cloud Platform. You will collaborate closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals. Key Responsibilities: - Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP) - Define data strategy, standards, and best practices for cloud data engineering and analytics - Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery - Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery) - Architect data lakes, warehouses, and real-time data platforms - Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP) - Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers - Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards - Provide technical leadership in architectural decisions and future-proofing the data ecosystem Required Skills & Qualifications: - 10+ years of experience in data architecture, data engineering, or enterprise data platforms - Minimum 3-5 years of hands-on experience in GCP Data Service - Proficient in: BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, Cloud SQL/Spanner - Python / Java / SQL - Data modeling (OLTP, OLAP, Star/Snowflake schema) - Experience with real-time data processing, streaming architectures, and batch ETL pipelines - Good understanding of IAM, networking, security models, and cost optimization on GCP - Prior experience in leading cloud data transformation projects - Excellent communication and stakeholder management skills Preferred Qualifications: - GCP Professional Data Engineer / Architect Certification - Experience with Terraform, CI/CD, GitOps, Looker / Data Studio / Tableau for analytics - Exposure to AI/ML use cases and MLOps on GCP - Experience working in agile environments and client-facing roles What We Offer: - Opportunity to work on large-scale data modernization projects with global clients - A fast-growing company with a strong tech and people culture - Competitive salary, benefits, and flexibility - Collaborative environment that values innovation and leadership,
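As a hedged, streaming-oriented counterpart to the ingestion architecture described above, the sketch below shows an Apache Beam pipeline reading JSON events from Pub/Sub and streaming them into BigQuery; the subscription, table, and schema are assumptions, not details of the client engagement.

```python
# Streaming Beam/Dataflow sketch: Pub/Sub JSON events -> BigQuery.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run():
    options = PipelineOptions(streaming=True, project="my-gcp-project", region="us-central1")
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/my-gcp-project/subscriptions/events-sub")
            | "Decode" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteBQ" >> beam.io.WriteToBigQuery(
                "my-gcp-project:streaming.events",
                schema="event_id:STRING,event_type:STRING,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )

if __name__ == "__main__":
    run()
```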

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. Those in Oracle technology at PwC will focus on utilising and managing Oracle suite of software and technologies for various purposes within an organisation. You will be responsible for tasks such as installation, configuration, administration, development, and support of Oracle products and solutions. Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Respond effectively to the diverse perspectives, needs, and feelings of others. Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems. Use critical thinking to break down complex concepts. Understand the broader objectives of your project or role and how your work fits into the overall strategy. Develop a deeper understanding of the business context and how it is changing. Use reflection to develop self awareness, enhance strengths and address development areas. Interpret data to inform insights and recommendations. Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements. 
Role / Job Title Senior Associate Tower Oracle Experience 6 - 10 years Key Skills Oracle Fusion PPM - Project Billing / Project Costing, Fixed Assets, Integration with Finance Modules, BPM Workflow and OTBI Reports Educational Qualification BE / B Tech / ME / M Tech / MBA / B.SC / B.Com / BBA Work Location India Job Description 5 - 9 years of experience of Oracle Fusion Applications, specifically PPM Cloud (Costing, Billing and Project Resource Management / Project Management) and Oracle Cloud Fixed Assets Should have completed minimum two end-to-end implementations in Fusion PPM modules, plus upgradation, lift and shift and support projects experience and integration of Fixed Assets Experience in Oracle Cloud / Fusion PPM Functional modules and Fixed Assets along with integration with all finance and SCM modules Should be able to understand and articulate business requirements and propose solutions after performing appropriate due diligence Good knowledge of BPM Approval Workflow Solid understanding of Enterprise Structures, Hierarchies, FlexFields, Extensions setup in Fusion Project Foundations and Subledger Accounting Experience in working with Oracle Support for various issue resolutions Exposure to performing Unit Testing and UAT of issues and collaborating with the business users to obtain UAT sign-off Quarterly Release Testing, preparation of release notes and presenting the new features Worked on Transition Management Experience in working with various financials data upload / migration techniques like FBDI / ADFDI and related issue resolutions Experience in supporting period end closure activities independently Experience in reconciliation of financial data between GL and subledger modules High level knowledge of end-to-end integration of Financial Modules with other modules like Projects, Procurement / Order Management and HCM Fair knowledge of other Fusion modules like SCM or PPM functionality is a plus Generate ad hoc reports to measure and to communicate the health of the applications Focus on reducing recurring issues caused by the Oracle Fusion application Prepare process flows, dataflow diagrams, requirement documents, user training and onboarding documents to support upcoming projects and enhancements Deliver and track the delivery of issue resolutions to meet the SLAs and KPIs Should have good communication, presentation, analytical and problem-solving skills Coordinate with team to close the client requests on time and within SLA Should be able to independently conduct CRP, UAT and SIT sessions with the clients / stakeholders Should be able to manage the Oracle Fusion PPM Track independently, interact with clients, conduct business requirement meetings and user training sessions Managed Services - Application Evolution Services At PwC we relentlessly focus on working with our clients to bring the power of technology and humans together and create simple, yet powerful solutions. We imagine a day when our clients can simply focus on their business, knowing that they have a trusted partner for their IT needs. Every day we are motivated and passionate about making our clients better. Within our Managed Services platform, PwC delivers integrated services and solutions that are grounded in deep industry experience and powered by the talent that you would expect from the PwC brand. The PwC Managed Services platform delivers scalable solutions that add greater value to our clients' enterprises through technology and human-enabled experiences. 
Our team of highly skilled and trained global professionals, combined with the use of the latest advancements in technology and process, allows us to provide effective and efficient outcomes. With PwC's Managed Services our clients are able to focus on accelerating their priorities, including optimizing operations and accelerating outcomes. PwC brings a consultative first approach to operations, leveraging our deep industry insights combined with world class talent and assets to enable transformational journeys that drive sustained client outcomes. Our clients need flexible access to world class business and technology capabilities that keep pace with today's dynamic business environment. Within our global Managed Services platform, we provide Application Evolution Services (formerly Application Managed Services), where we focus on the evolution of our clients' applications and cloud portfolio. Our focus is to empower our clients to navigate and capture the value of their application portfolio while cost-effectively operating and protecting their solutions. We do this so that our clients can focus on what matters most to their business: accelerating growth that is dynamic, efficient and cost-effective. As a member of our Application Evolution Services (AES) team, we are looking for candidates who thrive working in a high-paced work environment, capable of working on a mix of critical Application Evolution Service offerings and engagements, including help desk support, enhancement and optimization work, as well as strategic roadmap and advisory level work. It will also be key to lend experience and effort in helping win and support customer engagements from not only a technical perspective, but also a relationship perspective.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

About Us We are a fast-growing Direct-to-Consumer (D2C) company revolutionizing how customers interact with our products. Our data-driven approach is at the core of our business strategy, enabling us to make informed decisions that enhance customer experience and drive business growth. We're looking for a talented Senior Data Engineer to join our team and help shape our data infrastructure for the future. Role Overview As a Senior Data Engineer, you will architect, build, and maintain our data infrastructure that powers critical business decisions. You will work closely with data scientists, analysts, and product teams to design and implement scalable solutions for data processing, storage, and retrieval. Your work will directly impact our ability to leverage data for business intelligence, machine learning initiatives, and customer insights. Key Responsibilities Design, build, and maintain our end-to-end data infrastructure on AWS and GCP cloud platforms Develop and optimize ETL/ELT pipelines to process large volumes of data from multiple sources Build and support data pipelines for reporting, analytics, and machine learning applications Implement and manage streaming data solutions using Kafka and other technologies Design and optimize database schemas and data models in ClickHouse and other databases Develop and maintain data workflows using Apache Airflow and similar orchestration tools Write efficient, maintainable, and scalable code using PySpark and other data processing frameworks Collaborate with data scientists to implement ML infrastructure for model training and deployment Ensure data quality, reliability, and security across all data platforms Monitor data pipelines and implement proactive alerting systems Troubleshoot and resolve data infrastructure issues Document data flows, architectures, and processes Mentor junior data engineers and contribute to establishing best practices Stay current with industry trends and emerging technologies in data engineering Qualifications Required : Bachelor's degree in Computer Science, Engineering, or related technical field (Master's preferred) 5+ years of experience in data engineering roles Strong expertise in AWS and/or GCP cloud platforms and services Proficiency in building data pipelines using modern ETL/ELT tools and frameworks Experience with stream processing technologies such as Kafka Hands-on experience with ClickHouse or similar analytical databases Strong programming skills in Python and experience with PySpark Experience with workflow orchestration tools like Apache Airflow Solid understanding of data modeling, data warehousing concepts, and dimensional modeling Knowledge of SQL and NoSQL databases Strong problem-solving skills and attention to detail Excellent communication skills and ability to work in cross-functional teams Preferred Experience in D2C, e-commerce, or retail industries Knowledge of data visualization tools (Tableau, Looker, Power BI) Experience with real-time analytics solutions Familiarity with CI/CD practices for data pipelines Experience with containerization technologies (Docker, Kubernetes) Understanding of data governance and compliance requirements Experience with MLOps or ML engineering Technologies Cloud Platforms: AWS (S3, Redshift, EMR, Lambda), GCP (BigQuery, Dataflow, Dataproc) Data Processing: Apache Spark, PySpark, Python, SQL Streaming: Apache Kafka, Kinesis Data Storage: ClickHouse, S3, BigQuery, PostgreSQL, MongoDB Orchestration: Apache Airflow Version Control: Git Containerization: 
Docker, Kubernetes (optional) What We Offer Competitive salary and comprehensive benefits package Opportunity to work with cutting-edge data technologies Professional development and learning opportunities Modern office in Mumbai with great amenities Collaborative and innovation-driven culture Opportunity to make a significant impact on company growth (ref:hirist.tech)
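To make the orchestration part of this role concrete (Airflow driving a daily batch ELT job), here is a minimal Python sketch. The DAG id, task logic, and date handling are hypothetical placeholders; a real pipeline would call out to Spark, ClickHouse, or a warehouse loader rather than print statements.

```python
# Minimal Airflow orchestration sketch for a daily ELT job of the kind this
# posting describes. DAG id and task bodies are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(ds, **_):
    # Placeholder: pull the previous day's orders from a source system
    # (e.g. an object-store export) into a staging area keyed by the run date.
    print(f"extracting orders for {ds}")


def load_to_warehouse(ds, **_):
    # Placeholder: load the staged partition into the analytical store
    # (ClickHouse, BigQuery, Redshift, ...) and run basic quality checks.
    print(f"loading orders for {ds}")


with DAG(
    dag_id="daily_orders_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ style schedule argument
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load
```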

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Overview: TekWissen is a global workforce management provider throughout India and many other countries in the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place – one that benefits lives, communities and the planet Job Title: Software Engineer Practitioner Location: Chennai Work Type: Hybrid Position Description: We're seeking a highly skilled and experienced Full Stack Data Engineer to play a pivotal role in the development and maintenance of our Enterprise Data Platform. In this role, you'll be responsible for designing, building, and optimizing scalable data pipelines within our Google Cloud Platform (GCP) environment. You'll work with GCP-native technologies like BigQuery, Dataform, Dataflow, and Pub/Sub, ensuring data governance, security, and optimal performance. This is a fantastic opportunity to leverage your full-stack expertise, collaborate with talented teams, and establish best practices for data engineering at the client. Basic Qualifications: Bachelor's or Master's degree in Computer Science, Engineering or a related field of study 5+ Years - Strong understanding of database concepts and experience with multiple database technologies, optimizing query and data processing performance. 5+ Years - Full Stack Data Engineering competency in a public cloud (Google) Critical thinking skills to propose data solutions, test, and make them a reality. 5+ Years - Highly proficient in SQL, Python, Java - Experience programming engineering transformations in Python or a similar language. 5+ Years - Ability to work effectively across organizations, product teams and business partners. 5+ Years - Knowledge of Agile (Scrum) Methodology, experience in writing user stories Deep understanding of data service ecosystems including data warehousing, lakes and marts User experience advocacy through empathetic stakeholder relationships. Effective communication both internally (with team members) and externally (with stakeholders) Knowledge of Data Warehouse concepts and experience with Data Warehouse / ETL processes Strong process discipline and thorough understanding of IT processes (ISP, Data Security). Skills Required: Data Architecture, Data Warehousing, Dataform, Google Cloud Platform - BigQuery, Dataflow, Dataproc, Data Fusion, Terraform, Tekton, Cloud SQL, Airflow, Postgres, PySpark, Python, API Experience Required: Excellent communication, collaboration and influence skills; ability to energize a team. Knowledge of data, software and architecture operations, data engineering and data management standards, governance and quality Hands-on experience in Python using libraries like NumPy, Pandas, etc. Extensive knowledge and understanding of GCP offerings, bundled services, especially those associated with data operations Cloud Console, BigQuery, Dataflow, Dataform, Pub/Sub Experience with recoding, re-developing and optimizing data operations, data science and analytical workflows and products. Experience Required: 5+ Years Education Required: Bachelor's Degree TekWissen® Group is an equal opportunity employer supporting workforce diversity.
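As a small, hedged illustration of the BigQuery-centric transformation work this posting describes, here is a Python sketch using the google-cloud-bigquery client. The project, dataset, table, and column names are hypothetical placeholders.

```python
# Minimal sketch of a BigQuery transformation step of the kind this role
# describes. Project, dataset, and table names are hypothetical placeholders.
from google.cloud import bigquery


def build_daily_summary(project_id: str = "my-project") -> None:
    client = bigquery.Client(project=project_id)

    # Aggregate raw events into a daily summary table.
    query = """
        SELECT DATE(event_ts) AS event_date,
               user_id,
               COUNT(*) AS event_count
        FROM `my-project.raw.events`
        WHERE DATE(event_ts) = @run_date
        GROUP BY event_date, user_id
    """
    job_config = bigquery.QueryJobConfig(
        destination=f"{project_id}.analytics.daily_user_summary",
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
        query_parameters=[
            bigquery.ScalarQueryParameter("run_date", "DATE", "2024-01-01"),
        ],
    )
    client.query(query, job_config=job_config).result()  # waits for completion


if __name__ == "__main__":
    build_daily_summary()
```

In practice a step like this would be scheduled by Composer/Airflow or expressed declaratively in Dataform; the sketch only shows the query-job mechanics.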

Posted 2 weeks ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Responsibilities This is a CONTRACT TO HIRE on-site role for a Data Engineer at Quilytics in Mumbai. The contract will be of 6 months with an opportunity to convert to a full time role. As a Data Engineer, you will be responsible for data integration, data modeling, ETL (Extract Transform Load), data warehousing, data analytics, and ensuring data integrity and quality. You will be expected to understand the fundamentals of data flow and orchestration and to design and implement secure pipelines and data warehouses. Maintaining data integrity and quality is of utmost importance. You will collaborate with the team to design, develop, and maintain data pipelines and data platforms using Cloud ecosystems like GCP, Azure, Snowflake etc. You will be responsible for creating and managing the end-to-end data pipeline using custom scripts in Python, R language or any third party tools like Dataflow, Airflow, AWS Glue, Fivetran, Alteryx etc. (see the custom-script sketch after this posting). The data pipelines built will be used for managing various operations from data acquisition, data storage to data transformation and visualization. You will also work closely with cross-functional teams to identify data-driven solutions to business problems and help clients make data-driven decisions. You will also be expected to help build dashboards or any custom reports in Google Sheets or Excel. Basic to mid level proficiency in creating and editing dashboards on at least one tool is a must. Qualifications 2+ years of experience in using the Python language to perform Data Engineering, Data Modeling, Data Warehousing, Data Analytics and ETL (Extract Transform Load) Familiarity with GUI based ETL tools like Azure Data Factory, AWS Glue, Fivetran, Talend, Pentaho etc. for data integration and other data operations. Strong programming skills in SQL, Python, and/or R. This is a must-have skill. Experience in designing and implementing data pipelines and data platforms in cloud and on-premise systems Basic to mid level proficiency in data visualization on any of the industry accepted tools like Power BI, Looker Studio or Tableau is a plus. Understanding of data integration and data governance principles Knowledge of cloud platforms such as Snowflake, AWS or Azure Excellent analytical and problem-solving skills and good communication and interpersonal skills Bachelor's or Master's degree in Data Science, Computer Science, or a related field
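For a sense of the custom-script pipelines this posting mentions, here is a minimal pandas-based extract-transform-load sketch. The file path, connection string, column names, and table name are hypothetical placeholders.

```python
# Minimal custom-script ETL sketch of the kind this role describes:
# extract a CSV export, clean it with pandas, and load it into a warehouse
# table. File path, connection string, and table name are hypothetical.
import pandas as pd
from sqlalchemy import create_engine


def run_pipeline() -> None:
    # Extract: read a raw export dropped by an upstream system.
    orders = pd.read_csv("exports/orders_2024-01-01.csv")

    # Transform: basic cleaning and type normalization.
    orders = orders.dropna(subset=["order_id"])
    orders["order_date"] = pd.to_datetime(orders["order_date"])
    orders["amount"] = orders["amount"].astype(float)

    # Load: append into a staging table (Postgres shown as an example target).
    engine = create_engine("postgresql+psycopg2://user:password@host:5432/analytics")
    orders.to_sql("staging_orders", engine, if_exists="append", index=False)


if __name__ == "__main__":
    run_pipeline()
```

A scheduler such as Airflow or a simple cron job would typically wrap this script and parameterize the date.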

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description Role Proficiency: Leverage expertise in a technology area (e.g. Informatica transformation, Teradata data warehouse, Hadoop, Analytics) Responsible for architecture for small/mid-size projects. Outcomes Implement either data extract and transformation for a data warehouse (ETL, data extracts, data load logic, mapping, workflows, stored procedures), a data analysis solution, data reporting solutions, or cloud data tools in any one of the cloud providers (AWS/Azure/GCP) Understand business workflows and related data flows. Develop design for data acquisition and data transformation or data modelling; applying business intelligence on data or design data fetching and dashboards Design information structure, work- and dataflow navigation. Define backup, recovery and security specifications Enforce and maintain naming standards and data dictionary for data models Provide or guide team to perform estimates Help team to develop proof of concepts (POC) and solutions relevant to customer problems. Able to troubleshoot problems while developing POCs Architect / Big Data Speciality Certification in AWS/Azure/GCP (general, for example Coursera or a similar learning platform, or any ML) Measures Of Outcomes Percentage of billable time spent in a year for developing and implementing data transformation or data storage Number of best practices documented in any new tool and technology emerging in the market Number of associates trained on the data service practice Outputs Expected Strategy & Planning: Create or contribute short-term tactical solutions to achieve long-term objectives and an overall data management roadmap Implement methods and procedures for tracking data quality, completeness, redundancy and improvement Ensure that data strategies and architectures meet regulatory compliance requirements Begin engaging external stakeholders including standards organizations, regulatory bodies, operators and scientific research communities, or attend conferences with respect to data in cloud Operational Management Help Architects to establish governance, stewardship and frameworks for managing data across the organization Provide support in implementing the appropriate tools, software, applications and systems to support data technology goals Collaborate with project managers and business teams for all projects involving enterprise data Analyse data-related issues with systems integration, compatibility and multi-platform integration Project Control And Review Provide advice to teams facing complex technical issues in the course of project delivery Define and measure project and program specific architectural and technology quality metrics Knowledge Management & Capability Development Publish and maintain a repository of solutions, best practices, standards and other knowledge articles for data management Conduct and facilitate knowledge sharing and learning sessions across the team Gain industry standard certifications on technology or area of expertise Support technical skill building (including hiring and training) for the team based on inputs from project manager / RTE's Mentor new members in the team in technical areas Gain and cultivate domain expertise to provide best and optimized solutions to customers (delivery) Requirement Gathering And Analysis Work with customer business owners and other teams to collect, analyze and understand the requirements including NFRs / define NFRs Analyze gaps / trade-offs based on current system context and industry practices; clarify the requirements by working with the 
customer Define the systems and sub-systems that define the programs People Management Set goals and manage performance of team engineers Provide career guidance to technical specialists and mentor them Alliance Management Identify alliance partners based on the understanding of service offerings and client requirements In collaboration with Architect, create a compelling business case around the offerings Conduct beta testing of the offerings and relevance to program Technology Consulting In collaboration with Architects II and III, analyze the application and technology landscape, processes and tools to arrive at the architecture options best fit for the client program Analyze Cost vs Benefits of solution options Support Architects II and III to create a technology / architecture roadmap for the client Define Architecture strategy for the program Innovation And Thought Leadership Participate in internal and external forums (seminars, paper presentations, etc.) Understand clients' existing business at the program level and explore new avenues to save cost and bring process efficiency Identify business opportunities to create reusable components/accelerators and reuse existing components and best practices Project Management Support Assist the PM / Scrum Master / Program Manager to identify technical risks and come up with mitigation strategies Stakeholder Management Monitor the concerns of internal stakeholders like Product Managers & RTE's and external stakeholders like client architects on Architecture aspects. Follow through on commitments to achieve timely resolution of issues Conduct initiatives to meet client expectations Work to expand professional network in the client organization at team and program levels New Service Design Identify potential opportunities for new service offerings based on customer voice / partner inputs Conduct beta testing / POC as applicable Develop collaterals and guides for GTM Skill Examples Use data services knowledge creating POCs to meet business requirements; contextualize the solution to the industry under guidance of Architects Use technology knowledge to create Proof of Concept (POC) / (reusable) assets under the guidance of the specialist. Apply best practices in own area of work, helping with performance troubleshooting and other complex troubleshooting. Define, decide and defend the technology choices made; review solution under guidance Use knowledge of technology trends to provide inputs on potential areas of opportunity for UST Use independent knowledge of Design Patterns, Tools and Principles to create high level design for the given requirements. Evaluate multiple design options and choose the appropriate options for best possible trade-offs. Conduct knowledge sessions to enhance team's design capabilities. Review the low and high level design created by Specialists for efficiency (consumption of hardware, memory and memory leaks etc.) Use knowledge of Software Development Process, Tools & Techniques to identify and assess incremental improvements for software development process, methodology and tools. Take technical responsibility for all stages in the software development process. Conduct optimal coding with clear understanding of memory leakage and related impact. 
Implement global standards and guidelines relevant to programming and development; come up with 'points of view' and new technological ideas Use knowledge of Project Management & Agile Tools and Techniques to support, plan and manage medium size projects/programs as defined within UST, identifying risks and mitigation strategies Use knowledge of Project Metrics to understand relevance in project. Collect and collate project metrics and share with the relevant stakeholders Use knowledge of Estimation and Resource Planning to create estimates and plan resources for specific modules or small projects with detailed requirements or user stories in place Strong proficiency in understanding data workflows and dataflow Attention to detail High analytical capabilities Knowledge Examples Data visualization; data migration; RDBMSs (relational database management systems); SQL; Hadoop technologies like MapReduce, Hive and Pig; programming languages, especially Python and Java; operating systems like UNIX and MS Windows; backup/archival software. Additional Comments AI Architect Role Summary: Hands-on AI Architect with strong expertise in Deep Learning, Generative AI, and real-world AI/ML systems. The role involves leading the architecture, development, and deployment of AI agent-based solutions, supporting initiatives such as intelligent automation, anomaly detection, and GenAI-powered assistants across enterprise operations and engineering. This is a hands-on role ideal for someone who thrives in fast-paced environments, is passionate about AI innovations, and can adapt across multiple opportunities based on business priorities. Key Responsibilities: Design and architect AI-based solutions including multi-agent GenAI systems using LLMs and RAG pipelines (a minimal retrieval sketch follows this posting). Build POCs, prototypes, and production-grade AI components for operations, support automation, and intelligent assistants. Lead end-to-end development of AI agents for use cases such as triage, RCA automation, and predictive analytics. Leverage GenAI (LLMs) and Time Series models to drive intelligent observability and performance management. Work closely with product, engineering, and operations teams to align solutions with domain and customer needs. Own the model lifecycle from experimentation to deployment using modern MLOps and LLMOps practices. Ensure scalable, secure, and cost-efficient implementation across AWS and Azure cloud environments. Key Skills & Technology Areas: AI/ML Expertise: 8+ years in AI/ML, with hands-on experience in deep learning, model deployment, and GenAI. LLMs & Frameworks: GPT-3+, Claude, LLAMA3, LangChain, LangGraph, Transformers (BERT, T5), RAG pipelines, LLMOps. Programming: Python (advanced), Keras, PyTorch, Pandas, FastAPI, Celery (for agent orchestration), Redis. Modeling & Analytics: Time Series Forecasting, Predictive Modeling, Synthetic Data Generation. Data & Storage: ChromaDB, Pinecone, FAISS, DynamoDB, PostgreSQL, Azure Synapse, Azure Data Factory. Cloud & Tools: AWS (Bedrock, SageMaker, Lambda), Azure (Azure ML, Azure Databricks, Synapse), GCP (Vertex AI - optional). Observability Integration: Splunk, ELK Stack, Prometheus. DevOps/MLOps: Docker, GitHub Actions, Kubernetes, CI/CD pipelines, model monitoring & versioning. Architectural Patterns: Microservices, Event-Driven Architecture, Multi-Agent Systems, API-first Design. Other Requirements: Proven ability to work independently and collaboratively in agile, innovation-driven teams. Strong problem-solving mindset and product-oriented thinking. 
Excellent communication and technical storytelling skills. Flexibility to work across multiple opportunities based on business priorities. Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus. Skills: Python, Pandas, AI/ML, GenAI
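To ground the RAG-pipeline responsibility mentioned above, here is a minimal retrieval sketch using ChromaDB (one of the stores listed in the posting). The document snippets, collection name, and prompt template are hypothetical; a real pipeline would chunk source documents and then call an LLM with the retrieved context.

```python
# Minimal retrieval-augmented-generation (RAG) retrieval sketch of the kind
# this role describes, using ChromaDB's default embedding function.
import chromadb

client = chromadb.Client()  # in-memory instance for illustration
collection = client.create_collection("runbooks")

# Index a few knowledge-base snippets (hypothetical content).
collection.add(
    ids=["kb-1", "kb-2"],
    documents=[
        "Restart the ingestion service when consumer lag exceeds 10k messages.",
        "RCA template: impact, timeline, root cause, corrective actions.",
    ],
)

# Retrieve context for an incoming incident question.
question = "What should I do when consumer lag spikes?"
results = collection.query(query_texts=[question], n_results=1)
context = results["documents"][0][0]

# Assemble the prompt that would be sent to an LLM (the call itself is omitted).
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```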

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks and DataProc, with coding expertise in Python, PySpark and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake, BigQuery, Lakehouse and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions. Outcomes Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance and performance using design patterns and reusing proven solutions. Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards, debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives, integrating the overall solution seamlessly. Develop and manage data storage solutions including relational databases, NoSQL databases and data lakes. Stay updated on the latest trends and best practices in data engineering, cloud technologies and big data tools. Influence and improve customer satisfaction through effective data solutions. Measures Of Outcomes Adherence to engineering processes and standards Adherence to schedule / timelines Adhere to SLAs where applicable # of defects post delivery # of non-compliance issues Reduction of reoccurrence of known defects Quick turnaround of production bugs Completion of applicable technical/domain certifications Completion of all mandatory training requirements Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times). Average time to detect, respond to and resolve pipeline failures or data issues. Number of data security incidents or compliance breaches. Outputs Expected Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates and checklists. Review code for team members and peers. Documentation Create and review templates, checklists, guidelines and standards for design processes and development. Create and review deliverable documents including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases and results. Configuration Define and govern the configuration management plan. Ensure compliance within the team. Testing Review and create unit test cases, scenarios and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed. Domain Relevance Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise. Project Management Manage the delivery of modules effectively. Defect Management Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality. 
Estimation Create and provide input for effort and size estimation for projects. Knowledge Management Consume and contribute to project-related documents, SharePoint libraries and client universities. Review reusable documents created by the team. Release Management Execute and monitor the release process to ensure smooth transitions. Design Contribution Contribute to the creation of high-level design (HLD), low-level design (LLD) and system architecture for applications, business components and data models. Customer Interface Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations. Team Management Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives. Certifications Obtain relevant domain and technology certifications to stay competitive and informed. Skill Examples Proficiency in SQL, Python or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage, retrieval and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components. Knowledge Examples Knowledge of various ETL services offered by cloud providers including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF and ADLF. Proficiency in SQL for analytics, including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering. Additional Comments Data Engineering Role Summary: Skilled Data Engineer with strong Python programming skills and experience in building scalable data pipelines across cloud environments. The candidate should have a good understanding of ML pipelines and basic exposure to GenAI solutioning. This role will support large-scale AI/ML and GenAI initiatives by ensuring high-quality, contextual, and real-time data availability. Key Responsibilities: Design, build, and maintain robust, scalable ETL/ELT data pipelines in AWS/Azure environments (see the PySpark sketch after this posting). Develop and optimize data workflows using PySpark, SQL, and Airflow. Work closely with AI/ML teams to support training pipelines and GenAI solution deployments. Integrate data with vector databases like ChromaDB or Pinecone for RAG-based pipelines. Collaborate with solution architects and GenAI leads to ensure reliable, real-time data availability for agentic AI and automation solutions. Support data quality, validation, and profiling processes. 
Key Skills & Technology Areas: Programming & Data Processing: Python (4–6 years), PySpark, Pandas, NumPy Data Engineering & Pipelines: Apache Airflow, AWS Glue, Azure Data Factory, Databricks Cloud Platforms: AWS (S3, Lambda, Glue), Azure (ADF, Synapse), GCP (optional) Databases: SQL/NoSQL, Postgres, DynamoDB, Vector databases (ChromaDB, Pinecone) – preferred ML/GenAI Exposure (basic): Hands-on with Pandas, scikit-learn, knowledge of RAG pipelines and GenAI concepts Data Modeling: Star/Snowflake schema, data normalization, dimensional modeling Version Control & CI/CD: Git, Jenkins, or similar tools for pipeline deployment Other Requirements: Strong problem-solving and analytical skills Flexible to work on fast-paced and cross-functional priorities Experience collaborating with AI/ML or GenAI teams is a plus Good communication and a collaborative, team-first mindset Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus. Skills: ETL, Big Data, PySpark, SQL
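Here is the PySpark sketch referenced in the responsibilities above: a minimal batch transformation of the kind this role describes. The input path, column names, and output location are hypothetical placeholders.

```python
# Minimal PySpark batch-transformation sketch: read raw events, derive a
# cleaned daily aggregate, and write it out in a warehouse-friendly format.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_event_aggregation").getOrCreate()

# Extract: raw JSON events landed by an upstream ingestion job.
events = spark.read.json("s3://raw-zone/events/2024-01-01/")

# Transform: filter bad records and aggregate per user per day.
daily = (
    events
    .filter(F.col("user_id").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "user_id")
    .agg(F.count("*").alias("event_count"))
)

# Load: write partitioned Parquet for downstream warehouse loading.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://curated-zone/daily_user_events/"
)

spark.stop()
```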

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company Description Omio's vision is to enable people to travel seamlessly anywhere, anyway. We are bringing all global transport into a single distribution system and creating end-to-end magical consumer journeys. 1 billion users use Omio, doing over a billion searches a year. With Omio you can compare and book trains, buses, ferries and flights globally. Offering transparent pricing and easy booking, Omio makes travel planning simple, flexible, and personal. Omio is available in 45 countries, 32 languages and 33 currencies, and collaborates with over 2,300 providers to offer millions of unique journeys and bookable travel modes. We work with 12,000 local transport operators across 240 searchable countries, with over 10 million unique routes searched each year, and our discovery product "Rome2Rio" helps trip planners coordinate their travel anywhere in the world. Our offices are based in Berlin, Prague, Melbourne, Brazil, Bangalore, and London. We are a growing team of more than 430 passionate employees from more than 50 countries who share the same vision: to create a single tool to help send travellers almost anywhere in the world. Job Description As a Principal Software Engineer, you will design, implement and evangelise cross-team projects that have a company-wide business impact while pushing our technical status quo forward. We expect you to be mindful, approachable, experienced in a wide range of technologies and an expert in designing & building highly available distributed systems. If this excites you instead of scaring you, you will fit nicely! Our Technologies Java, NodeJS, Python, Go Kubernetes, Docker, Jenkins, Terraform Google Cloud Platform (Pub/Sub, GCS, BigQuery, BigTable, Dataflow, ...), AWS (RedShift, Kinesis, S3) MySQL, PostgreSQL, Couchbase, Clickhouse Check more details: https://omio.tech/radar Qualifications Who you are Passionate about technology but also interested in the business impact of projects you work on Able to gather requirements, create a delivery plan and handle alignment between different teams/departments to deliver your project; you will have end-to-end responsibility. Able to lead internal discussions about the direction of major areas of technology in Omio Able to mentor & inspire engineers across the organisation As a thoughtful leader and evangelist, you work out architecture, engineering best practices and solutions across the Tech department. We expect you to contribute to both internal and external tech talks representing Omio You provide full and detailed analysis, insightful comments and recommendations for action. You contribute to the development of the tribe strategy across the organisation Your Skill Set 10+ years in a Software Engineering position Deep knowledge of the JVM environment, large-scale distributed systems and cloud solutions Solid communication competencies to convey vision, align teams and advise the management team; good English language skills Experience in new product development and project management Demonstrated ability to interact and collaborate with all levels of internal and external customers Proactive problem solving - you resolve obstacles before they can become problems Method expert, with the ability to quickly interpret an extensive variety of technical information and quickly find the resolution and method for an issue. 
Additional Information Learn more about Omio Engineering and our Team: https://medium.com/omio-engineering Here at Omio, we know that no two people are alike, and that's a great thing. Diversity in culture, thought and background has been key to growing our product beyond borders to reach millions of users from all over the world. That's why we believe in giving equal opportunity to all, regardless of race, gender, religion, sexual orientation, age, or disability. Hiring process and background checks At Omio, we work in partnership with Giant Screening; once a job offer has been accepted, Giant will be engaged to carry out background screening. Giant will reach out to you via email and occasionally via telephone/text message so that they can gather all relevant information required. Consent will be requested prior to any information being passed to our services company. What's in it for you? A competitive and attractive compensation package Opportunity to develop your skills on a new level A generous pension scheme A diverse team of more than 45 nationalities Develop maintainable solutions for complex problems with broad impact on the business as a whole Make decisions that will have a direct impact on the long-term success of Omio Diversity makes us stronger We value diversity and welcome all applicants regardless of ethnicity, religion, national origin, sexual orientation, gender, gender identity, age or disability.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Location: Bengaluru (Hybrid) Role Summary We’re seeking a skilled Data Scientist with deep expertise in recommender systems to design and deploy scalable personalization solutions. This role blends research, experimentation, and production-level implementation, with a focus on content-based and multi-modal recommendations using deep learning and cloud-native tools. Responsibilities Research, prototype, and implement recommendation models: two-tower, multi-tower, cross-encoder architectures Utilize text/image embeddings (CLIP, ViT, BERT) for content-based retrieval and matching Conduct semantic similarity analysis and deploy vector-based retrieval systems (FAISS, Qdrant, ScaNN) Perform large-scale data prep and feature engineering with Spark/PySpark and Dataproc Build ML pipelines using Vertex AI, Kubeflow, and orchestration on GKE Evaluate models using recommender metrics (nDCG, Recall@K, HitRate, MAP) and offline frameworks Drive model performance through A/B testing and real-time serving via Cloud Run or Vertex AI Address cold-start challenges with metadata and multi-modal input Collaborate with engineering for CI/CD, monitoring, and embedding lifecycle management Stay current with trends in LLM-powered ranking, hybrid retrieval, and personalization Required Skills Python proficiency with pandas, polars, numpy, scikit-learn, TensorFlow, PyTorch, transformers Hands-on experience with deep learning frameworks for recommender systems Solid grounding in embedding retrieval strategies and approximate nearest neighbor search GCP-native workflows: Vertex AI, Dataproc, Dataflow, Pub/Sub, Cloud Functions, Cloud Run Strong foundation in semantic search, user modeling, and personalization techniques Familiarity with MLOps best practices—CI/CD, infrastructure automation, monitoring Experience deploying models in production using containerized environments and Kubernetes Nice to Have Ranking models knowledge: DLRM, XGBoost, LightGBM Multi-modal retrieval experience (text + image + tabular features) Exposure to LLM-powered personalization or hybrid recommendation systems Understanding of real-time model updates and streaming ingestion
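To illustrate the embedding-retrieval portion of this role (vector-based candidate retrieval ahead of ranking), here is a minimal FAISS sketch. The vectors are random placeholders standing in for encoder outputs (e.g. BERT/CLIP embeddings), and the dimensions and index choice are assumptions, not details from the listing.

```python
# Minimal embedding-retrieval sketch: index item embeddings with FAISS and
# fetch nearest neighbours for a query/user embedding. Vectors here are
# random placeholders for real encoder outputs.
import faiss
import numpy as np

dim = 128
num_items = 10_000

# Placeholder item embeddings, L2-normalized so inner product = cosine similarity.
item_vecs = np.random.rand(num_items, dim).astype("float32")
faiss.normalize_L2(item_vecs)

index = faiss.IndexFlatIP(dim)  # exact inner-product search; swap for IVF/HNSW at scale
index.add(item_vecs)

# Placeholder query/user embedding.
query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)

scores, item_ids = index.search(query, 10)  # top-10 candidates for re-ranking
print(item_ids[0], scores[0])
```

In a production system the candidates returned here would feed a ranking model and be evaluated with metrics such as Recall@K and nDCG, as the posting describes.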

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Job Title: GCP Data Engineer Location: Remote Experience Required: 8 Years Position Type: Freelance / Contract As a Senior Data Engineer with a focus on pipeline migration from SAS to Google Cloud Platform (GCP) technologies, you will tackle intricate problems and create value for our business by designing and deploying reliable, scalable solutions tailored to the company's data landscape. You will lead the development of custom-built data pipelines on the GCP stack, ensuring seamless migration of existing SAS pipelines. Additionally, you will mentor junior engineers, define standards and best practices, and contribute to strategic planning for data initiatives. Responsibilities:
● Lead the design, development, and implementation of data pipelines on the GCP stack, with a focus on migrating existing pipelines from SAS to GCP technologies.
● Develop modular and reusable code to support complex ingestion frameworks, simplifying the process of loading data into data lakes or data warehouses from multiple sources (see the loading sketch after this posting).
● Mentor and guide junior engineers, providing technical oversight and fostering their professional growth.
● Work closely with analysts, architects, and business process owners to translate business requirements into robust technical solutions.
● Utilize your coding expertise in scripting languages (Python, SQL, PySpark) to extract, manipulate, and process data effectively.
● Leverage your expertise in various GCP technologies, including BigQuery, GCP Workflows, Dataflow, Cloud Scheduler, Secret Manager, Batch, Cloud Logging, Cloud SDK, Google Cloud Storage, IAM, and Vertex AI, to enhance data warehousing solutions.
● Lead efforts to maintain high standards of development practices, including technical design, solution development, systems configuration, testing, documentation, issue identification, and resolution, writing clean, modular, and sustainable code.
● Understand and implement CI/CD processes using tools like Pulumi, GitHub, Cloud Build, Cloud SDK, and Docker.
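Here is the loading sketch referenced above: a small, reusable helper of the kind a migration ingestion framework might contain, loading a CSV extract from Cloud Storage into BigQuery. The bucket, dataset, and table names are hypothetical placeholders, not part of the listing.

```python
# Minimal ingestion-framework sketch: load one CSV extract from Cloud Storage
# into a BigQuery table. Bucket, dataset, and table names are hypothetical.
from google.cloud import bigquery


def load_csv_to_bigquery(gcs_uri: str, table_id: str) -> int:
    """Load one CSV extract into BigQuery and return the resulting row count."""
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,           # header row
        autodetect=True,               # infer schema from the file
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    load_job = client.load_table_from_uri(gcs_uri, table_id, job_config=job_config)
    load_job.result()  # block until the load finishes
    return client.get_table(table_id).num_rows


if __name__ == "__main__":
    rows = load_csv_to_bigquery(
        "gs://legacy-extracts/claims/2024-01-01.csv",  # hypothetical extract landed on GCS
        "my-project.staging.claims",
    )
    print(f"table now has {rows} rows")
```

A helper like this would typically be parameterized per source and invoked from Cloud Composer, GCP Workflows, or Cloud Scheduler as part of the migrated pipeline.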

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

karnataka

On-site

Capgemini Invent is the digital innovation, consulting, and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science, and creative design to help CxOs envision and build what's next for their businesses. In this role, you should have developed/worked on at least one Gen AI project and have experience in data pipeline implementation with cloud providers such as AWS, Azure, or GCP. You should also be familiar with cloud storage, cloud databases, cloud data warehousing, and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, and S3. Additionally, a good understanding of cloud compute services, load balancing, identity management, authentication, and authorization in the cloud is essential. Your profile should include good knowledge of infrastructure capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs. performance and scaling. You should be able to contribute to making architectural choices using various cloud services and solution methodologies. Proficiency in programming using Python is required, along with expertise in cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on the cloud. Understanding networking, security, design principles, and best practices in the cloud is also important. At Capgemini, we value flexible work arrangements to provide support for maintaining a healthy work-life balance. You will have opportunities for career growth through various career growth programs and diverse professions tailored to support you in exploring a world of opportunities. Additionally, you can equip yourself with valuable certifications in the latest technologies such as Generative AI. Capgemini is a global business and technology transformation partner with a rich heritage of over 55 years. We have a diverse team of 340,000 members in more than 50 countries, working together to accelerate the dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. Trusted by clients to unlock the value of technology, we deliver end-to-end services and solutions leveraging strengths from strategy and design to engineering, fueled by market-leading capabilities in AI, cloud, and data, combined with deep industry expertise and a partner ecosystem. Our global revenues in 2023 were reported at €22.5 billion.

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Are you energized by solving complex technical challenges and building scalable systems that make a real impact? Does crafting clean, efficient code and mentoring a team of talented developers get you up in the morning? We are looking for a Senior Software Developer who thrives on collaboration and innovation to help us design and develop high-performance software solutions that power our platform and delight our users. About the Job: Are you passionate about building intelligent systems that transform how businesses work? Do you want to be part of something transformative, right here in Chennai? Vendasta is assembling a world-class AI team, and we're looking for a Senior Software Developer to help build the backbone of an AI-first future. You'll work with cutting-edge technologies like Google Cloud, LangChain, MCP, and TypeScript, alongside various AI model vendors, to shape a platform that empowers small and medium businesses (SMBs) with AI employees, digital workers that deliver real impact. And it's not just for our customers: AI is deeply integrated into our internal developer workflows, with tools like Gemini, Cursor, ChatGPT, and Claude Code enhancing how we plan, build, and deploy software. Our teams follow a Scrum Agile methodology and are supported by a robust Site Reliability Engineering (SRE) group to ensure dependable, scalable operations. At Vendasta, our Research & Development department fosters a culture of experimentation, continuous learning, and developer growth. With a dynamic team of over 100 developers, we're committed to building impactful solutions that help SMBs succeed in the digital economy. In this role, you'll serve as a visible steward of engineering culture, providing technical leadership, encouraging innovation, and enabling team success through clarity, collaboration, and proactive planning. You'll also be instrumental in driving the responsible integration of AI into our systems, helping codify and reuse knowledge, and reducing manual overhead through automation. Your Impact: Provide technical leadership and direction to your team; act as the go-to resource for system design, scalability, and code quality. Champion AI-aligned work, proactively identifying opportunities for automation, efficiency, and responsible AI use. Develop and maintain scalable, maintainable, and testable software using Vendasta's technology stack. Collaborate with cross-functional teams to translate strategic goals into actionable and executable backlogs, including AI-enhanced solutions. Participate in team sprint planning, offering technical insights and ensuring clear, feasible delivery of work. Mentor and coach team members on traditional and AI-enhanced programming practices. Document long-term technical vision for services owned by the team, ensuring continuity, clarity, and scalability. Lead incident response readiness by ensuring team confidence and capability in emergency scenarios. Advocate for strong engineering practices such as CI/CD, automated testing, RFC creation, and safe experimentation. Contribute to Vendasta's broader engineering culture through mentorship, tech blogs, internal working groups, and shared repositories. Drive AI literacy by embedding tools, building reusable prompt libraries, and leading AI-forward discussions and demos. What You Bring to the Table: 10+ years of software development experience, preferably within SaaS or technology environments. Proven track record of leading technical teams and delivering complex software solutions. 
Deep knowledge of software design patterns, scalable architecture, and cloud-native systems. Ability to drive AI adoption through hands-on implementation of AI-enhanced workflows, frameworks, or APIs. Strong communication skills to effectively engage both technical and non-technical stakeholders. Passion for mentoring and growing technical talent within a team. Demonstrated experience working with the Scrum framework and modern development methodologies. Bachelor's degree in Computer Science or a related field (preferred). Technologies We Use: Cloud Native Computing using Google Cloud Platform: BigQuery, Cloud Dataflow, Cloud Pub/Sub, Google Data Studio, Cloud IAM, Cloud Storage, Cloud SQL, Cloud Spanner, Cloud Datastore, Google Maps Platform, Stackdriver, etc. We have been invited to join the Early Access Program on quite a few GCP technologies. GoLang, Typescript, Python, JavaScript, HTML, Angular, GRPC, Kubernetes Elasticsearch, MySQL, PostgreSQL AI prompt libraries, AI-enhanced dev tools, internal AI use cases and frameworks About Vendasta: We help businesses get more customers. And keep them. Vendasta is an AI-powered customer acquisition and engagement platform for SMBs and the partners who support them, streamlining marketing, sales, and operations through intelligent AI employees, automation, and real-time actions and insights. From creating awareness to nurturing lasting customer relationships, Vendasta offers a suite of solutions with AI assistants that streamline every stage of the customer journey. By combining a business's unique data with AI and automation, Vendasta simplifies marketing, sales, and operations, eliminating the need for multiple disjointed systems. Perks: At Vendasta, we believe in supporting our team with meaningful benefits that foster growth, well-being, and a strong sense of community. Our compensation package includes stock options (as per policy), comprehensive health insurance, a provident fund, and generous paid time off including flex days to help you recharge. We support your commute with public transport reimbursement and fuel your ambition through extensive training and career development programs, including professional development plans, leadership workshops, and mentorship opportunities. Our culture, anchored in our core values of Drive, Innovation, Respect, and Agility, thrives in a collaborative and dynamic environment. Plus, enjoy daily free snacks, hot beverages, and catered lunches every Friday at our office. Discover your potential. Build something that matters. Help us lead the AI revolution from right here in Chennai.

Posted 2 weeks ago

Apply