2 years
0 Lacs
Chennai, Tamil Nadu, India
Join us in bringing joy to customer experience. Five9 is a leading provider of cloud contact center software, bringing the power of cloud innovation to customers worldwide. Living our values every day results in our team-first culture and enables us to innovate, grow, and thrive while enjoying the journey together. We celebrate diversity and foster an inclusive environment, empowering our employees to be their authentic selves.

The Software Application Engineer II will help design and implement solutions to extract data from sources such as Salesforce, Jira, NetSuite, and Logisense using APIs and Python, building data pipelines on Google Cloud Platform (GCP) and ensuring seamless access to data for business intelligence and data science tools. They will support a wide range of projects while collaborating closely with management teams and business leaders. The ideal candidate will have a strong understanding of data extraction and data warehousing concepts, and the ability to document technical knowledge as clear processes and procedures.

This position is based out of one of the offices of our affiliate Acqueon Technologies in India, and will adopt the hybrid work arrangements of that location. You will be a member of the Acqueon team with responsibilities supporting Five9 products, collaborating with global teammates based primarily in the United States.

Responsibilities
- Work closely with members of the finance team to understand business requirements and prepare data sets for analysis
- Design, implement, and maintain scalable data extraction in GCP for structured and unstructured data from various sources (databases, APIs, cloud storage)
- Utilize GCP services including BigQuery, Dataflow, Pub/Sub, Cloud Run functions, Airflow, and Cloud Storage to optimize and manage data workflows, ensuring scalability, performance, and security
- Continuously monitor, test, and optimize data workflows to improve performance, cost efficiency, and reliability
- Maintain comprehensive technical documentation of data pipelines, systems, and architecture for knowledge sharing and future development
- Enhance applications by identifying opportunities for improvement, making recommendations, and designing and implementing systems
- Provide information by collecting, analyzing, and summarizing development and service issues
- Work across multiple facets of a project and juggle multiple tasks at the same time

Requirements
- Bachelor's and/or Master's degree in Computer Science, Computer Engineering, or a related technical discipline
- Experience working with enterprise systems and understanding of the data design pattern differences between transactional systems and analytical data warehouses
- 2+ years of experience using GCP services (BigQuery, Dataflow, Pub/Sub, Cloud Run functions); GCP certifications preferred (e.g., Professional Cloud Developer, Professional Cloud Database Engineer)
- 2+ years of professional or open-source experience reading data from APIs and developing programs in Java, Python, and JavaScript
- Expert in building, deploying, and maintaining data pipelines using open-source tools (e.g., Airflow, Composer, Dataproc, Dataflow)
- Experience with GitLab, Jenkins, and CI/CD methodologies and tools
- Skilled at writing optimized ETL jobs to feed data into big data systems (e.g., BigQuery, DOMO, Hadoop)
- Experience creating dashboards/visualizations using Data Studio, DOMO, or similar
- Advanced proficiency with SQL, with experience writing complex queries, optimizing for performance, and using SQL in large-scale data processing workflows
- Good understanding of cloud design considerations and limitations and their impact on pricing
- Attention to detail and personal pride in work undertaken
- Ability to self-start and self-direct work in an unstructured environment; comfortable dealing with ambiguity

Five9 embraces diversity and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better we are. Five9 is an equal opportunity employer. View our privacy policy, including our privacy notice to California residents, here: https://www.five9.com/pt-pt/legal. Note: Five9 will never request that an applicant send money as a prerequisite for commencing employment with Five9.
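For illustration, the API-to-BigQuery extraction this role describes might look roughly like the following minimal Python sketch. The Jira endpoint, field names, token handling, and table ID are assumptions invented for the example, not details from the posting.

```python
import requests
from google.cloud import bigquery

# Hypothetical Jira search endpoint and BigQuery destination -- placeholders.
JIRA_URL = "https://example.atlassian.net/rest/api/2/search"
TABLE_ID = "my-project.finance_analytics.jira_issues"

def extract_issues(jql: str, token: str) -> list[dict]:
    """Pull issues from the (assumed) Jira REST API, paginating 100 at a time."""
    issues, start_at = [], 0
    while True:
        resp = requests.get(
            JIRA_URL,
            headers={"Authorization": f"Bearer {token}"},
            params={"jql": jql, "startAt": start_at, "maxResults": 100},
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json()
        issues.extend(
            {"key": i["key"], "status": i["fields"]["status"]["name"]}
            for i in page["issues"]
        )
        start_at += 100
        if start_at >= page["total"]:
            return issues

def load_to_bigquery(rows: list[dict]) -> None:
    """Append the extracted rows to a BigQuery table via a load job."""
    client = bigquery.Client()
    job = client.load_table_from_json(rows, TABLE_ID)
    job.result()  # wait for the load job to finish
```

In a production pipeline the same two steps would typically run as tasks inside an Airflow/Cloud Composer DAG rather than a standalone script.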
Posted 2 months ago
4 - 7 years
18 - 22 Lacs
Pune
Work from Office
UKG is a leader in the HCM space and is at the forefront of artificial intelligence innovation, dedicated to developing cutting-edge generative AI solutions that transform the HR/HCM industry and enhance user experiences. We are seeking talented and motivated AI engineers to join our dynamic team and contribute to the development of next-generation AI/GenAI-based products and solutions. This role will provide you with the opportunity to work on cutting-edge SaaS technologies and impactful projects that are used by enterprises and users worldwide. As a Senior Software Engineer, you will be involved in the design, development, testing, deployment, and maintenance of software solutions. You will work in a collaborative environment, contributing to the technical foundation behind our flagship products and services.

Responsibilities:
- Software Development: Write clean, maintainable, and efficient code for various software applications and systems.
- GenAI Product Development: Participate in the entire AI development lifecycle, including data collection, preprocessing, model training, evaluation, and deployment. Assist in researching and experimenting with state-of-the-art generative AI techniques to improve model performance and capabilities.
- Design and Architecture: Participate in design reviews with peers and stakeholders.
- Code Review: Review code developed by other developers, providing feedback that adheres to industry-standard best practices such as coding guidelines.
- Testing: Build testable software, define tests, participate in the testing process, and automate tests using tools (e.g., JUnit, Selenium) and design patterns, leveraging the test automation pyramid as the guide.
- Debugging and Troubleshooting: Triage defects or customer-reported issues, and debug and resolve them in a timely and efficient manner.
- Service Health and Quality: Contribute to the health and quality of services and incidents, promptly identifying and escalating issues. Collaborate with the team in utilizing service health indicators and telemetry for action. Assist in conducting root cause analysis and implementing measures to prevent future recurrences.
- DevOps Model: Understand working in a DevOps model. Begin to take ownership of working with product management on requirements to design, develop, test, deploy, and maintain the software in production.
- Documentation: Properly document new features, enhancements, and fixes to the product, and contribute to training materials.

Basic Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
- 2+ years of professional software development experience.
- Proficiency as a developer using Python, FastAPI, PyTest, Celery, and other Python frameworks.
- Experience with software development practices and design patterns.
- Familiarity with version control systems like Git/GitHub and bug/work tracking systems like JIRA.
- Basic understanding of cloud technologies and DevOps principles.
- Strong analytical and problem-solving skills, with a proven track record of building and shipping successful software products and services.

Preferred Qualifications:
- Experience with object-oriented programming, concurrency, design patterns, and REST APIs.
- Experience with CI/CD tooling such as Terraform and GitHub Actions.
- High-level familiarity with AI/ML, GenAI, and MLOps concepts.
- Familiarity with frameworks like LangChain and LangGraph.
- Experience with SQL and NoSQL databases such as MongoDB, MSSQL, or Postgres.
- Experience with testing tools such as PyTest, PyMock, xUnit, and mocking frameworks.
- Experience with GCP technologies such as Vertex AI, BigQuery, GKE, GCS, Dataflow, and Kubeflow.
- Experience with Docker and Kubernetes.
- Experience with Java and Scala a plus.
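As a rough illustration of the Python stack named here (FastAPI for the service layer, PyTest for testing), a minimal sketch follows; the route, model, and placeholder summarization logic are invented for the example and stand in for a real GenAI call.

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()

class SummarizeRequest(BaseModel):
    text: str

@app.post("/summarize")
def summarize(req: SummarizeRequest) -> dict:
    # Placeholder for model inference (e.g., a LangChain chain);
    # simple truncation stands in for the GenAI call in this sketch.
    return {"summary": req.text[:100]}

# A matching PyTest-style test using FastAPI's test client:
def test_summarize():
    client = TestClient(app)
    resp = client.post("/summarize", json={"text": "hello world"})
    assert resp.status_code == 200
    assert "summary" in resp.json()
```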
Posted 2 months ago
8 - 13 years
37 - 65 Lacs
Trivandrum
Hybrid
Email your updated resume to vishnu.sasidharan@equifax.com with the subject line: Software Engineer - Tech Lead/Specialist

Key Responsibilities:
- Oversee the design, development, and deployment of innovative batch and data products
- Provide technical direction for Java and GCP implementations
- Lead and mentor a team of developers and engineers
- Collaborate with cross-functional teams to translate requirements into technical solutions
- Implement rigorous testing strategies and optimize performance
- Maintain documentation and ensure compliance with standards

What experience you need
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field
- Proficiency in Java and its ecosystems
- Extensive experience with Google Cloud Platform, including GKE, Cloud Storage, Dataflow, and BigQuery
- Minimum of 8 years in software development, focusing on batch processing and data solutions
- Exceptional communication and teamwork skills
- Experience with securely handling sensitive data (PII/PHI)
- Proven track record of writing defect-free code
- At least 3 years in a leadership role

What could set you apart
- Ability to convey complex technical concepts to non-technical stakeholders
- Relevant certifications (e.g., Google Cloud Professional Developer, Professional Data Engineer)
- Experience with multiple cloud platforms (e.g., AWS, Azure) in addition to GCP
- Proficiency in both Java and Python for versatile development
- Experience in optimizing performance and reducing costs in cloud environments
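A minimal sketch of the kind of batch pipeline this role describes (Cloud Storage in, BigQuery out, run on Dataflow) is shown below using the Apache Beam Python SDK; the posting itself is Java-centric, and all project, bucket, and table names here are placeholders.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_line(line: str) -> dict:
    """Turn one CSV line into a typed row (placeholder two-column layout)."""
    fields = line.split(",")
    return {"id": fields[0], "amount": float(fields[1])}

def run():
    # Placeholder project/bucket/table names -- substitute real resources.
    opts = PipelineOptions(
        runner="DataflowRunner",
        project="my-project",
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
    )
    with beam.Pipeline(options=opts) as p:
        (p
         | "Read" >> beam.io.ReadFromText("gs://my-bucket/input/*.csv")
         | "Parse" >> beam.Map(parse_line)
         | "Write" >> beam.io.WriteToBigQuery(
               "my-project:analytics.transactions",
               schema="id:STRING,amount:FLOAT",
               write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
           ))

if __name__ == "__main__":
    run()
```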
Posted 2 months ago
6 - 8 years
14 - 18 Lacs
Bengaluru, Kolkata
Work from Office
Must have exposure to GCP machine learning and Vertex AI.
- Collaborate with data engineers and software developers to facilitate the deployment, monitoring, and maintenance of machine learning models.
- Experience in training and deploying AutoML and custom tabular/image models using Vertex AI.
- Hands-on experience in designing, implementing, and maintaining robust data pipelines, data transformation processes, and data storage solutions on GCP, utilizing Apache Beam, Dataflow, and BigQuery.
- Demonstrated expertise in training, evaluating, and tuning ML models in BigQuery.
- Hands-on experience with Docker, Kubernetes, Kubeflow, and Cloud Build.
- Successfully created and managed CI/CD pipelines for machine learning model deployment, ensuring automation and reproducibility with Cloud Build on GCP.
- Familiar with security best practices, including IAM, network security, and data encryption in GCP.
- Experienced in troubleshooting issues across the end-to-end ML lifecycle, from data preprocessing to model serving.
- Adept with Vertex AI's Generative AI Studio.
- Hands-on experience using the Vertex AI PaLM API, including text-bison, chat-bison, and textembedding-gecko.
- Knowledgeable in prompt engineering design, techniques, and best practices; skilled in creating prompts for ideation, text classification, text extraction, and text summarization.
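A minimal sketch of calling the text-bison model through the Vertex AI Python SDK, as referenced above; the project and region are placeholders, and since the PaLM API surface has since given way to newer Gemini APIs, treat this as illustrative of the pattern rather than a current recommendation.

```python
import vertexai
from vertexai.language_models import TextGenerationModel

# Placeholder project/region -- substitute real values.
vertexai.init(project="my-project", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison")
response = model.predict(
    "Summarize the following support ticket in one sentence: ...",
    temperature=0.2,        # low temperature for more deterministic summaries
    max_output_tokens=128,  # cap the response length
)
print(response.text)
```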
Posted 2 months ago
5 - 9 years
6 - 10 Lacs
Chennai
Work from Office
Job Description

Overview
We are seeking an experienced Senior BigQuery Developer to join our team and lead the design, development, and optimization of data pipelines and BigQuery solutions. The ideal candidate will have a strong background in data engineering, SQL, and Google Cloud Platform (GCP) services, with expertise in building scalable, secure, and high-performance data systems.

Responsibilities
- Design, develop, and optimize complex BigQuery solutions to meet business requirements.
- Create and maintain ETL/ELT pipelines using tools like Dataflow, Cloud Composer, or Apache Airflow.
- Write and optimize advanced SQL queries for performance and scalability in BigQuery.
- Collaborate with data analysts, scientists, and stakeholders to understand data requirements and deliver actionable insights.
- Implement robust data governance, security, and compliance frameworks for BigQuery datasets.
- Integrate BigQuery with other GCP services like Cloud Storage, Pub/Sub, and Looker Studio for end-to-end data workflows.
- Develop and manage scheduled jobs and scripts for data ingestion and transformation.
- Perform root cause analysis and resolve performance issues in data pipelines and BigQuery queries.
- Stay updated with new features and advancements in Google Cloud technologies.

Requirements
- 5+ years of experience in data engineering, with a focus on Google BigQuery.
- Proficiency in advanced SQL and experience with query optimization in BigQuery.
- Hands-on experience with Google Cloud Platform (GCP) services, including Dataflow, Cloud Functions, and Cloud Composer.
- Strong understanding of data warehouse architecture and best practices.
- Experience with data modeling and schema design in BigQuery.
- Familiarity with programming languages such as Python or Java for automation and scripting.
- Experience working with large datasets and real-time data processing.
- Knowledge of data governance, security policies, and access controls in GCP.
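As one concrete instance of the query-optimization work described (filtering on a partition column so BigQuery prunes storage instead of scanning the whole table), a hedged sketch using the BigQuery Python client; the dataset, table, and column names are invented for the example.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Filtering on the (assumed) partitioning column event_date lets BigQuery
# prune partitions, cutting both scan cost and query latency.
query = """
    SELECT user_id, COUNT(*) AS events
    FROM `my-project.analytics.events`
    WHERE event_date BETWEEN @start AND @end
    GROUP BY user_id
"""
job = client.query(
    query,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("start", "DATE", "2024-01-01"),
            bigquery.ScalarQueryParameter("end", "DATE", "2024-01-31"),
        ]
    ),
)
for row in job.result():
    print(row.user_id, row.events)
```

Parameterized queries also keep date literals out of the SQL string, which helps when the same query is reused by scheduled jobs.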
Posted 2 months ago
15 - 20 years
17 - 22 Lacs
Hyderabad
Work from Office
Data Modeler + Solution Design

Job Summary: The GCP Solution Designer will be responsible for designing and implementing robust data warehouse solutions on Google Cloud Platform (GCP). This role requires deep expertise in GCP services, data modelling, ETL processes, and a strong understanding of business requirements to deliver scalable and efficient data solutions.

Key Responsibilities:
1. Solution Design and Architecture: Design comprehensive data solutions on GCP, ensuring scalability, performance, and security. Develop data models and schemas to meet business requirements. Collaborate with stakeholders to gather requirements and translate them into technical specifications.
2. Implementation and Development: Implement ETL processes using GCP tools such as Dataflow, Dataproc, and Cloud Data Fusion. Develop and optimize data pipelines to ensure data integrity and performance. Create and manage data storage solutions using BigQuery and Cloud Storage.
3. Data Management and Optimization: Implement best practices for data management, including data governance, quality, and lifecycle management. Optimize query performance and storage costs by leveraging GCP features and tools.
4. Collaboration and Communication: Work closely with data engineers, analysts, and other stakeholders to ensure seamless integration of data solutions. Provide technical guidance and mentorship to junior team members. Communicate complex technical concepts to non-technical stakeholders effectively.
5. Continuous Improvement: Stay updated with the latest advancements in GCP services and data warehousing technologies. Evaluate and recommend new tools and technologies to enhance data solutions. Continuously improve existing data solutions to meet evolving business needs.
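One concrete instance of the modeling work described (creating a partitioned, clustered BigQuery fact table so queries scan less data) might be sketched as below; the project, dataset, and field names are placeholders invented for the example.

```python
from google.cloud import bigquery

client = bigquery.Client()

# A fact table partitioned by day and clustered by customer, so that
# date-bounded, per-customer queries touch a fraction of the storage.
table = bigquery.Table(
    "my-project.dwh.fact_orders",
    schema=[
        bigquery.SchemaField("order_id", "STRING", mode="REQUIRED"),
        bigquery.SchemaField("customer_id", "STRING"),
        bigquery.SchemaField("order_date", "DATE"),
        bigquery.SchemaField("amount", "NUMERIC"),
    ],
)
table.time_partitioning = bigquery.TimePartitioning(field="order_date")
table.clustering_fields = ["customer_id"]
client.create_table(table)
```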
Posted 3 months ago
8 - 11 years
10 - 13 Lacs
Hyderabad
Work from Office
10+ years of experience in data engineering, with a focus on cloud-based solutions. Experience in designing solutions; should be able to review and guide the team. Extensive experience with Google Cloud Platform (GCP) and its data services, including BigQuery, dbt and streaming, Dataflow, Pub/Sub, Cloud Storage, and Cloud Composer. Proven track record of designing and building scalable data pipelines and architectures. Experience with ETL tools and processes. Design, develop, and maintain robust and scalable data pipelines using GCP services such as Dataflow, Pub/Sub, Cloud Functions, and Cloud Composer. Implement ETL (Extract, Transform, Load) processes to ingest data from various sources into GCP data warehouses like BigQuery.
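A hedged sketch of the Pub/Sub ingestion pattern this posting mentions: a subscriber that streams each message into BigQuery. The project, subscription, and table IDs are assumptions for illustration.

```python
import json
from google.cloud import bigquery, pubsub_v1

bq = bigquery.Client()
subscriber = pubsub_v1.SubscriberClient()
# Placeholder project and subscription names.
subscription = subscriber.subscription_path("my-project", "orders-sub")

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    row = json.loads(message.data)
    # Streaming insert; errors come back as a list of row-level problems,
    # and we only ack the message once the row landed successfully.
    errors = bq.insert_rows_json("my-project.dwh.orders_raw", [row])
    if not errors:
        message.ack()

future = subscriber.subscribe(subscription, callback=callback)
future.result()  # block and process messages until interrupted
```

At higher volumes the same route is usually handled by a managed Dataflow template or BigQuery subscription rather than a hand-rolled subscriber.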
Posted 3 months ago
5 - 8 years
7 - 10 Lacs
Bengaluru
Work from Office
To ensure successful initiation, planning, execution, control, and completion of the project by guiding team members on technical aspects and conducting reviews of technical documents and artefacts.
- Lead project development, production support, and maintenance activities.
- Ensure timesheets are completed and the invoicing process is finished on or before the deadline.
- Lead the customer interface for the project on an everyday basis, proactively addressing any issues before they are escalated.
- Create functional and technical specification documents.
- Track open tickets/incidents in the queue, allocate tickets to resources, and ensure the tickets are closed within the deadlines.
- Ensure analysts adhere to SLAs/KPIs/OLAs.
- Ensure that everyone in the delivery team, including yourself, is constantly thinking of ways to do things faster, better, or more economically.
- Lead the project and ensure it complies with software quality processes and timelines.
- Review functional and technical specification documents.
- Serve as the single point of contact for the team to the project stakeholders.
- Promote teamwork; motivate, mentor, and develop subordinates.

Band: U3
Competency: Data & Analytics
Posted 3 months ago
15 - 17 years
17 - 19 Lacs
Pune
Work from Office
Position Overview
We are seeking a dynamic and experienced Enterprise Solution Architect to lead the design and implementation of innovative solutions that align with our organization's strategic objectives. The Enterprise Solution Architect will play a key role in defining the architecture vision, establishing technical standards, and driving the adoption of best practices across the enterprise. The ideal candidate will have a deep understanding of enterprise architecture principles, business processes, and technology trends, with a focus on delivering scalable, flexible, and secure solutions.

Responsibilities
- Drive client conversations and solutions and build strong relationships with the client, acting as a trusted advisor and technical expert.
- Experienced in laying down an architectural roadmap, guidelines, and high-level design covering the end-to-end lifecycle of the data value chain, from ingestion, integration, and consumption (visualization, AI capabilities) to data governance and non-functionals (including data security).
- Experienced in delivering large-scale data platform implementations for telecom clients. Must have telecom domain understanding.
- Experienced in the implementation of data applications and platforms on GCP.
- Execute a comprehensive data migration strategy for our telecom client, involving multiple source systems to GCP.
- Deep dive into client requirements to understand their data needs and challenges. Proactively propose solutions that leverage GCP's capabilities or integrate with external tools for optimal results.
- Spearhead solution calls with the client, translating complex data architecture and engineering concepts into clear, actionable plans for data engineers. Demonstrate flexibility and adaptability to accommodate evolving needs.
- Develop a robust data model for the telecom client, ensuring data is organized, consistent, and readily available for analysis.
- Leverage your expertise in data, AI, and ML to create a future-proof blueprint for the client's data landscape, enabling advanced analytics and insights generation.
- Develop architectural principles, standards, and guidelines to ensure consistency, interoperability, and scalability across systems and applications.
- Lead the design and implementation of end-to-end solutions that leverage emerging technologies and industry best practices to address business challenges and opportunities.
- Conduct architectural reviews and assessments to validate design decisions, identify risks, and recommend mitigation strategies.
- Collaborate with vendors, partners, and external consultants to evaluate and select technology solutions that meet business requirements and align with enterprise architecture standards.
- Drive the adoption of cloud computing, microservices architecture, API management, and other emerging technologies to enable digital transformation and innovation.
- Communicate the enterprise architecture vision, principles, and roadmap to stakeholders at all levels of the organization, and advocate for architectural decisions and investments.

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Total experience of 18+ years in data analytics implementations.
- Minimum 10+ years of extensive experience as a Principal Solution Architect or in a similar senior role.
- Proven success in leading large-scale data migrations, particularly to GCP.
- In-depth knowledge of data architecture principles and best practices.
- Strong understanding of data modeling techniques and the ability to create efficient data models.
- Experience working with GCP and its various data management services (e.g., BigQuery, Cloud Storage, Dataflow, dbt).
- Experience with at least one programming language commonly used in data processing (e.g., Python, Java).
- A demonstrable understanding of Data Science, Artificial Intelligence, and Machine Learning concepts.
Posted 3 months ago
5 - 8 years
14 - 16 Lacs
Bengaluru
Remote
Hi all, we are hiring for the role of Python & GCP Engineer.

Experience: 5+ years
Location: Bangalore
Notice Period: Immediate to 15 days

Skills and Technical Expertise:
- Languages: Python, SQL, shell scripting
- Big Data: Kafka, PySpark, data warehousing, data lakes
- Cloud Platforms: GCP (GCS, BigQuery, Pub/Sub, Dataproc, Dataflow, Cloud Functions)
- Database: Schema design, optimization, stored procedures
- DevOps: CI/CD pipeline implementation, multi-cloud deployment automation
- Development: Parallel processing, streaming, low-level design

If you are interested, drop your resume at mojesh.p@acesoftlabs.com or call 9701971793.
Posted 3 months ago
2 - 5 years
4 - 8 Lacs
Pune
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.

Your Role
- Should have developed or worked on at least one GenAI project.
- Has data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP.
- Experience with cloud storage, cloud databases, cloud data warehousing, and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, and S3.
- Has good knowledge of cloud compute services and load balancing.
- Has good knowledge of cloud identity management, authentication, and authorization.
- Proficiency in using cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, and Azure Functions (see the sketch after this section).
- Experience in using cloud data integration services for structured, semi-structured, and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, and Dataproc.

Your Profile
- Good knowledge of infra capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs. performance and scaling.
- Able to contribute to making architectural choices using various cloud services and solution methodologies.
- Expertise in programming using Python.
- Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on cloud.
- Must understand networking, security, design principles, and best practices in cloud.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
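As a small illustration of the cloud utility functions named in the role, a hedged GCP Cloud Functions handler using the functions-framework library, triggered by a Cloud Storage object-finalize event; the function name and downstream handling are placeholders.

```python
import functions_framework

@functions_framework.cloud_event
def on_file_uploaded(event):
    # Triggered when an object is finalized in a Cloud Storage bucket;
    # the CloudEvent payload carries the bucket and object name.
    data = event.data
    print(f"New object gs://{data['bucket']}/{data['name']}")
    # A real pipeline would kick off downstream processing here,
    # e.g. publish to Pub/Sub or start a Dataflow job.
```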
Posted 3 months ago
6 - 10 years
8 - 12 Lacs
Mumbai
Hybrid
6+ years of experience writing, debugging, and troubleshooting code in mainstream Core Java. Experience with cloud technology: GCP, AWS, or Azure (added advantage). Design and build scalable and reliable data pipelines using Google Cloud services.

Required candidate profile: Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.). Performance monitoring using Google Data Studio or similar tools.
Posted 3 months ago
8 - 10 years
10 - 12 Lacs
Mumbai
Hybrid
Looking for strong candidates with GCP, Dataflow, Java, Spring, GKE, etc., along with responsibility, accountability, and ownership. Hands-on coding and programming. Good communication skills. Immediate joiners only.
Posted 3 months ago
6 - 10 years
8 - 12 Lacs
Bangalore Rural
Hybrid
6+ years of experience writing, debugging, and troubleshooting code in mainstream Core Java. Experience with cloud technology: GCP, AWS, or Azure (added advantage). Design and build scalable and reliable data pipelines using Google Cloud services.

Required candidate profile: Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.). Performance monitoring using Google Data Studio or similar tools.
Posted 3 months ago
8 - 10 years
10 - 12 Lacs
Bangalore Rural
Hybrid
Looking for strong candidates with GCP, Dataflow, Java, Spring, GKE, etc., along with responsibility, accountability, and ownership. Hands-on coding and programming. Good communication skills. Immediate joiners only.
Posted 3 months ago
6 - 10 years
8 - 12 Lacs
Pune
Hybrid
6+ years of experience writing, debugging, and troubleshooting code in mainstream Core Java. Experience with cloud technology: GCP, AWS, or Azure (added advantage). Design and build scalable and reliable data pipelines using Google Cloud services.

Required candidate profile: Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.). Performance monitoring using Google Data Studio or similar tools.
Posted 3 months ago
8 - 10 years
10 - 12 Lacs
Pune
Hybrid
Looking for strong candidates with GCP, Dataflow, Java, Spring, GKE, etc., along with responsibility, accountability, and ownership. Hands-on coding and programming. Good communication skills. Immediate joiners only.
Posted 3 months ago
3 - 7 years
5 - 9 Lacs
Pune
Work from Office
Our team is part of the area Technology, Data, and Innovation (TDI) Private Bank. Within TDI, Partner Data is the central client reference data system in Germany. As a core banking system, many banking processes and applications are integrated with it and communicate via more than 2,000 interfaces. From a technical perspective, we focus on the mainframe but also build solutions on-premises and in the cloud, with RESTful services and an Angular frontend. Next to maintenance and the implementation of new CTB requirements, the content focus also lies on the regulatory and tax topics surrounding a partner/client. We are looking for a highly motivated candidate for the Cloud Data Engineer area.

Your key responsibilities
- You are responsible for the implementation of the new project on GCP (Spark, Dataproc, Dataflow, BigQuery, Terraform, etc.) across the whole SDLC chain
- You support the migration of current functionalities to Google Cloud
- You are responsible for the stability of the application landscape and support software releases
- You also support L3 topics and application governance
- You are responsible in the CTM area for coding as part of an agile team (Java, Scala, Spring Boot)

Your skills and experience
- You have experience with databases (BigQuery, Cloud SQL, NoSQL, Hive, etc.) and development, preferably for big data and GCP technologies
- Strong understanding of the data mesh approach and integration patterns
- Understanding of party data and integration with product data
- Your architectural skills for big data solutions, especially interface architecture, allow a fast start
- You have experience in at least: Spark, Java, Scala and Python, Maven, Artifactory, the Hadoop ecosystem, GitHub Actions, GitHub, and Terraform scripting
- You have knowledge of customer reference data, customer opening processes, and preferably regulatory topics around know-your-customer processes
- You can work very well in teams but also independently, and are constructive and target-oriented
- Your English skills are good and you can communicate professionally but also informally in small talk with the team
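For illustration, reading and writing a BigQuery table from a Spark job (as on Dataproc) via the spark-bigquery connector might look like the following sketch; the table and bucket names are placeholders, and the connector JAR is assumed to be available on the cluster.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partner_migration").getOrCreate()

# Read a partner snapshot from BigQuery through the spark-bigquery connector.
partners = (spark.read.format("bigquery")
            .option("table", "my-project.partner_data.partners")
            .load())

# Example transformation: keep active partners only, then write back.
active = partners.filter(partners.status == "ACTIVE")
(active.write.format("bigquery")
 .option("table", "my-project.partner_data.partners_active")
 .option("temporaryGcsBucket", "my-staging-bucket")  # required by the connector
 .mode("overwrite")
 .save())
```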
Posted 3 months ago
10 - 14 years
25 - 30 Lacs
Pune
Work from Office
Role Description
The Business Architect defines the technical solution design of specific IT platforms and provides guidance to the squad members in order to design, build, test, and deliver high-quality software solutions. A key element in this context is the translation of functional and non-functional business requirements into an appropriate technical solution design, leveraging best practices and consistent design patterns. The Business Architect collaborates closely with Product Owners, Chapter Leads, and Squad members to ensure consistent adherence to the agreed-upon application design and is responsible for maintaining appropriate technical design documentation. The Business Architect ensures that the architectures and designs of solutions conform to the principles, blueprints, standards, patterns, etc., that have been established by Enterprise Architecture; in this context the Business Architect collaborates closely with the respective Solution Architect to ensure architecture compliance. The Business Architect also actively contributes to the definition and enrichment of design patterns and standards, with the aim of leveraging those across squads and tribes.

Your key responsibilities
- Define the technical architecture of IT solutions in line with functional and non-functional requirements, following consistent design patterns and best practices.
- Ensure that the solution design is in sync with WM target architecture blueprints and principles, as well as with overarching DB architecture and security standards.
- Create appropriate technical design documentation and ensure it is kept up to date.
- Provide guidance to the squad members to design, build, test, and deliver high-quality software solutions in line with business requirements.
- Take responsibility for all aspects of the solution architecture (i.e., maintainability, scalability, effective integration with other solutions, usage of shared solutions and components where possible, optimization of resource consumption, etc.) with the objective of meeting the appropriate balance between business needs and total cost of ownership.
- Closely collaborate with Enterprise Architecture to ensure architecture compliance and make sure that any design options are discussed in a timely manner to allow sufficient time for deliberate decision-taking.
- Present architecture proposals to relevant forums along with the Enterprise Architect at different levels, and drive the process to gain the necessary architecture approvals.
- Collaborate with relevant technology stakeholders within other squads and across tribes to ensure cross-squad and cross-tribe solution architecture synchronization and alignment.
- Contribute to the definition and enrichment of appropriate design patterns and standards that can be leveraged across WM squads and tribes.
- Serve as a counsel to designers and developers and carry out reviews of software designs and high-level/detailed-level design documentation provided by other squad members.
- Lead technical discussions with CCO, Data Factory, Central Data Quality and Compliance, end-to-end and control functions for technical queries; contribute to peer-level solution architecture reviews, e.g., within a respective chapter.

Your skills and experience
- Ability and experience in defining high-level and low-level technical solution designs for complex initiatives; very good analytical skills and the ability to oversee and structure complex tasks.
- Hands-on skills with various Google Cloud components like storage buckets, BigQuery, Dataproc, Cloud Composer, Cloud Functions, etc., along with PySpark and Scala, are essential; good to have experience in Cloud SQL, Dataflow, Java, and Unix. Experience with implementing a Google Cloud based solution is essential. (A short Cloud Composer sketch follows this posting.)
- Persuasive power and persistence in driving adherence to the solution design within the squad.
- Ability to apply the appropriate architectural patterns considering the relevant functional and non-functional requirements.
- Proven ability to balance business demands and IT capabilities in terms of standardization, reducing risk, and increasing IT flexibility.
- Comfortable working in an open, highly collaborative team; able to work in an agile and dynamic environment and to build up knowledge related to new technology/solutions in an effective and timely manner.
- Ability to communicate effectively with other technology stakeholders.
- Feedback: seeks feedback from others, provides feedback to others in support of their development, and is open and honest while dealing constructively with criticism.
- Inclusive leadership: values individuals and embraces diversity by integrating differences and promoting diversity and inclusion across teams and functions.
- Coaching: understands and anticipates people's needs, skills, and abilities in order to coach, motivate, and empower them for success.
- Broad set of architecture knowledge and application design skills and, depending on the specific squad requirements, in-depth expertise with regards to specific architecture domains (e.g., service and integration architecture, web and mobile front-end architecture, data architecture, security architecture, infrastructure architecture) and related technology stacks and design patterns.
- Experience in establishing thought leadership in solution architecture practices and ability to lead design and development teams in defining, building, and delivering first-class software solutions.
- Familiar with current and emerging technologies, tools, frameworks, and design patterns.
- Experience in effectively collaborating across multiple teams and geographies.
- Ability to appropriately consider other dimensions (e.g., financials, risk, time to market) on top of the architecture drivers in order to propose balanced and feasible architecture solutions.

Experience and Qualifications:
- 10+ years of relevant experience as a technology manager within the IT industry; experience in the financial/banking industry preferred.
- Minimum 8 years' experience supporting the Oracle platform in a mid-size to large corporate environment, preferably with banking/wealth management experience.
- Must have experience working in an agile organization.
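As referenced above, a minimal Cloud Composer (Airflow) DAG that orchestrates a daily BigQuery rebuild might look like the following sketch; the DAG ID, table names, and query are placeholders, and the operator import path assumes the Airflow Google provider package.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_partner_snapshot",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    rebuild_snapshot = BigQueryInsertJobOperator(
        task_id="rebuild_partner_snapshot",
        configuration={
            "query": {
                # Placeholder query -- real logic would live in versioned SQL.
                "query": """CREATE OR REPLACE TABLE reporting.partner_daily AS
                            SELECT partner_id, COUNT(*) AS events
                            FROM raw.partner_events
                            GROUP BY partner_id""",
                "useLegacySql": False,
            }
        },
    )
```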
Posted 3 months ago
15 - 20 years
17 - 22 Lacs
Pune
Work from Office
The Engineer is responsible for managing or performing work across multiple areas of the bank's overall IT platform/infrastructure, including analysis, development, and administration. It may also involve taking functional oversight of engineering delivery for specific departments. Work includes:
- Planning and developing entire engineering solutions to accomplish business goals
- Building reliability and resiliency into solutions with appropriate testing and reviewing throughout the delivery lifecycle
- Ensuring maintainability and reusability of engineering solutions
- Ensuring solutions are well architected and can be integrated successfully into the end-to-end business process flow
- Reviewing engineering plans and quality to drive re-use and improve engineering capability
- Participating in industry forums to drive adoption of innovative technologies, tools, and solutions in the Bank

Your Key Responsibilities:
The candidate is expected to:
- Be a hands-on engineering lead involved in analysis, design, design/code reviews, coding, and release activities
- Champion engineering best practices and guide/mentor the team to achieve high performance
- Work closely with business stakeholders, the Tribe Lead, the Product Owner, and the Lead Architect to successfully deliver the business outcomes
- Acquire functional knowledge of the business capability being digitized/re-engineered
- Demonstrate ownership, inspire others, show innovative thinking and a growth mindset, and collaborate for success

Your Skills & Experience:
- Minimum 15 years of IT industry experience in full stack development
- Expert in Java, Spring Boot, NodeJS, SQL/PLSQL, and ReactJS
- Strong experience in big data processing: Apache Spark, Hadoop, BigQuery, Dataproc, Dataflow, etc.
- Strong experience in Kubernetes and the OpenShift container platform
- Experience with databases: Oracle, PostgreSQL, MongoDB, Redis/Hazelcast; should understand data modeling, normalization, and performance optimization
- Experience in message queues (RabbitMQ/IBM MQ, JMS) and data streaming, i.e., Kafka, Pub/Sub, etc. (see the sketch after this posting)
- Experience working on public cloud: GCP preferred, AWS or Azure
- Knowledge of various distributed/multi-tiered architecture styles: microservices, data mesh, integration patterns, etc.
- Experience with modern software product delivery practices, processes and tooling, and BizDevOps skills such as CI/CD pipelines using Jenkins, GitHub Actions, etc.
- Experience designing solutions based on DDD and implementing clean hexagonal architecture: efficient systems that can handle large-scale operation
- Experience leading teams and mentoring developers
- Focus on quality: experience with TDD, BDD, stress and contract tests
- Proficient in working with APIs (Application Programming Interfaces) and data formats like JSON, XML, YAML, Parquet, etc.

Key Skills: Java, Spring Boot, NodeJS, SQL/PLSQL, ReactJS

Advantageous:
- Prior experience in the banking/finance domain
- Experience with hybrid cloud solutions, preferably using GCP
- Experience in product development

How we'll support you:
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

Our values define the working environment we strive to create: diverse, supportive, and welcoming of different views. We embrace a culture reflecting a variety of perspectives, insights, and backgrounds to drive innovation. We build talented and diverse teams to drive business results and encourage our people to develop to their full potential. Talk to us about flexible work arrangements and other initiatives we offer. We promote good working relationships and encourage high standards of conduct and work performance. We welcome applications from talented people from all cultures, countries, races, genders, sexual orientations, disabilities, beliefs, and generations, and are committed to providing a working environment free from harassment, discrimination, and retaliation. Visit our website to discover more about the culture of Deutsche Bank, including Diversity, Equity & Inclusion, Leadership, Learning, Future of Work, and more besides.
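As a small illustration of the messaging and streaming skills listed above, a hedged Kafka consumer sketch using the confluent-kafka Python client; the broker address, topic, and consumer group are placeholders, and the posting's own stack is Java-centric, so this shows the pattern rather than the expected language.

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "payments-consumer",        # placeholder consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["payments"])  # placeholder topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # Process the record; real code would deserialize and route it.
        print(f"{msg.topic()}[{msg.partition()}]@{msg.offset()}: {msg.value()!r}")
finally:
    consumer.close()
```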
Posted 3 months ago
7 - 12 years
10 - 20 Lacs
Pune
Hybrid
Lead Data Engineer
Experience: 7 - 10 years
Salary: Competitive
Preferred Notice Period: Within 30 days
Shift: 10:00 AM to 6:00 PM IST
Opportunity Type: Hybrid - Pune
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' partners.)

Must-have skills: Python, SQL, GCP, Dataflow, Pub/Sub, Cloud Storage, BigQuery
Good-to-have skills: AWS, Docker, Kubernetes, Generative AI, Azure

Our hiring partner is looking for a Lead Data Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Role Overview
We are seeking an experienced and dynamic Lead Data Engineer to join our team. This role is pivotal in advancing our data engineering practices on the Google Cloud Platform (GCP) and offers a unique opportunity to work with cutting-edge technologies, including Generative AI.

Key Responsibilities:
- Lead the design, implementation, and optimization of scalable data pipelines and architectures on GCP using key services such as BigQuery, Dataflow, Pub/Sub, and Cloud Storage.
- Collaborate with cross-functional teams to define data requirements and develop strategic solutions that address business needs.
- Enhance existing data infrastructure, ensuring high levels of performance, reliability, and security.
- Drive the integration and deployment of machine learning models and advanced analytics solutions, incorporating Generative AI where applicable.
- Establish and enforce best practices in data governance, data quality, and data security.
- Mentor and guide junior engineers, fostering a culture of innovation and continuous improvement.
- Stay informed about the latest trends in data engineering, GCP advancements, and Generative AI technologies to drive innovation within the team.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 7 to 10 years of experience in data engineering, with a strong emphasis on GCP technologies.
- Demonstrated expertise in building and managing data solutions using GCP services like BigQuery, Dataflow, and Cloud Composer.
- Proficiency in SQL and programming languages such as Python, Java, or Scala.
- Strong understanding of data modelling, warehousing concepts, and real-time data processing.
- Familiarity with containerization and orchestration tools like Docker and Kubernetes.
- Excellent analytical, problem-solving, and communication skills.
- Leadership experience with a proven ability to mentor and develop junior team members.

Preferred Qualifications:
- GCP Professional Data Engineer certification.
- Experience with Generative AI technologies and their practical applications.
- Knowledge of additional cloud platforms such as AWS or Azure.
- Experience with implementing data governance frameworks and tools.

How to apply for this opportunity
- Register or log in on our portal.
- Click 'Apply,' upload your resume, and fill in the required details.
- Then click 'Apply Now' to submit your application.
- Get matched and crack a quick interview with our hiring partner.
- Land your global dream job and get your exciting career started!

About Our Hiring Partner: At Inferenz, our team of innovative technologists and domain experts helps accelerate business growth through digital enablement, navigating industries with data, cloud, and AI services and solutions. We dedicate our resources to increasing efficiency and gaining a greater competitive advantage by leveraging various next-generation technologies. Our technology expertise has helped us deliver innovative solutions in key industries such as Healthcare & Life Sciences, Consumer & Retail, Financial Services, and emerging industries.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. You will also be assigned a dedicated Talent Success Coach during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 3 months ago
6 - 10 years
30 - 35 Lacs
Bengaluru
Work from Office
We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions.

Key Responsibilities:
- Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight).
- Work with structured and unstructured data to perform data transformation, cleansing, and aggregation.
- Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow).
- Optimize PySpark jobs for performance tuning, partitioning, and caching strategies.
- Design and implement real-time and batch data processing solutions.
- Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates.
- Ensure data security, governance, and compliance with industry best practices.
- Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models.
- Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization.
- Perform unit testing and validation to ensure data integrity and reliability.

Required Skills & Qualifications:
- 6+ years of experience in big data processing, ETL, and data engineering.
- Strong hands-on experience with PySpark (Apache Spark with Python).
- Expertise in SQL, the DataFrame API, and RDD transformations.
- Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL).
- Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow).
- Proficiency in writing optimized queries, partitioning, and indexing for performance tuning.
- Experience with workflow orchestration tools like Airflow, Oozie, or Prefect.
- Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.).
- Excellent problem-solving, debugging, and performance optimization skills.
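A minimal PySpark ETL sketch of the transform-cleanse-aggregate work described above: read raw CSVs, deduplicate and type-cast, aggregate daily, and write partitioned Parquet. The bucket paths and column names are placeholders invented for the example.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Read raw CSVs, cleanse, and type the columns (placeholder paths/columns).
orders = (spark.read.option("header", True).csv("gs://raw-zone/orders/*.csv")
          .dropDuplicates(["order_id"])
          .withColumn("amount", F.col("amount").cast("double"))
          .filter(F.col("amount") > 0))

# Aggregate to a daily revenue table.
daily = (orders.groupBy("order_date")
         .agg(F.sum("amount").alias("revenue"),
              F.count("*").alias("order_count")))

# Write partitioned Parquet so downstream readers can prune by date.
(daily.write.mode("overwrite")
 .partitionBy("order_date")
 .parquet("gs://curated-zone/daily_orders/"))
```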
Posted 3 months ago
12 - 17 years
35 - 60 Lacs
Chennai, Bengaluru
Hybrid
At ZoomInfo, we encourage creativity, value innovation, demand teamwork, expect accountability, and cherish results. We value your take-charge, take-initiative, get-stuff-done attitude and will help you unlock your growth potential. One great choice can change everything. Thrive with us at ZoomInfo.

ZoomInfo is a rapidly growing data-driven company, and as such, we understand the importance of a comprehensive and solid data solution to support decision-making in our organization. Our vision is to have a consistent, democratized, and accessible single source of truth for all company data analytics and reporting. Our goal is to improve decision-making processes by having the right information available when it is needed. As a Principal Software Engineer in our Data Platform infrastructure team, you'll have a key role in building and designing the strategy of our Enterprise Data Engineering group.

What You'll do:
- Design and build a highly scalable data platform to support data pipelines for diversified and complex data flows.
- Track and identify relevant new technologies in the market and push their implementation into our pipelines through research and POC activities.
- Deliver scalable, reliable, and reusable data solutions.
- Lead, build, and continuously improve our data gathering, modeling, and reporting capabilities and self-service data platforms.
- Work closely with data engineers, data analysts, data scientists, product owners, and domain experts to identify data needs.
- Develop processes and tools to monitor, analyze, maintain, and improve data operation, performance, and usability.

What you bring:
- Relevant Bachelor's degree or other equivalent software engineering background.
- 12+ years of experience as an infrastructure / data platform / big data software engineer.
- Experience with AWS/GCP cloud services such as GCS/S3, Lambda/Cloud Functions, EMR/Dataproc, Glue/Dataflow, and Athena.
- IaC design and hands-on experience.
- Familiarity designing CI/CD pipelines with Jenkins, GitHub Actions, or similar tools.
- Experience in designing, building, and maintaining enterprise systems in a big data environment on public cloud.
- Strong SQL abilities and hands-on experience with SQL, performing analysis and performance optimizations.
- Hands-on experience in Python or an equivalent programming language.
- Experience with administering data warehouse solutions (like BigQuery, Redshift, or Snowflake).
- Experience with data modeling, data catalog concepts, data formats, and data pipeline/ETL design, implementation, and maintenance.
- Experience with Airflow and DBT (advantage).
- Experience with Kubernetes using GKE or EKS (advantage).
- Experience with development practices: Agile, TDD (advantage).
Posted 3 months ago
10 - 19 years
22 - 30 Lacs
Chennai, Bengaluru, Hyderabad
Work from Office
Location: Chennai/Bangalore/Noida/Hyderabad
Skills: Hive, Python, Java or SQL, Hadoop, Spark, ETL, GCP (BigQuery, Cloud SQL, Dataflow, Dataproc, Cloud Build, Cloud Run, Cloud Functions, Pub/Sub, Cloud Composer), data lake, multi-cloud (e.g., Google Cloud, AWS, Azure)
Posted 3 months ago