10 - 16 years
35 - 100 Lacs
Mumbai
Work from Office
Job Summary
As an ATS (Account Technology Specialist) in NetApp's Sales function, you will use strong customer-handling and technical competencies to set objectives and execute plans for winning sales campaigns. This challenging, high-visibility position offers significant room for career growth and covers the largest account base in the region. You will develop long-term strategies and shorter-term plans to meet aggressive performance goals with channel partners and internal stakeholders, including the Client Executive and the District Manager. You must be extremely results-driven, customer-focused, tech-savvy, and skilled at building internal relationships and external partnerships.

Essential Functions
- Provide technical oversight to channel partners and customers within the territory to drive all pertinent issues, sales campaigns, and goal attainment.
- Work with the Client Executive toward the territory target by devising short-term goals and long-term strategies in the assigned accounts.
- Evangelise NetApp's proposition in the assigned territory.
- Drive technical closure for sales campaigns, positioning NetApp as the most viable solution for prospective customers.

Job Requirements
- Excellent verbal and written communication skills, including presentation skills.
- Proven experience in presales, designing and proposing technical solutions.
- Excellent presentation, relationship-building, and negotiating skills.
- Ability to work collaboratively with peers across functions, including Marketing, Sales, Sales Operations, Customer Support, and Product Development.
- Strong understanding of data storage, data protection, disaster recovery, and competitive offerings in the marketplace; understanding of cloud technologies is highly desirable.
- Ability to convey and analyze information clearly to help customers make buying decisions.
- An excellent understanding of how technology products and solutions solve business problems.
- Ability to hold key technical decision-maker and CXO relationships within major accounts in the assigned territory.

Education
At least 15 years of experience in technical presales. A Bachelor of Science degree in Engineering, Computer Science, or a related field is preferred; a graduate degree is mandatory.
Posted 2 months ago
3 - 8 years
15 - 30 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Salary: 15 to 30 LPA | Experience: 3 to 8 years | Location: Gurgaon/Bangalore/Pune/Chennai | Notice period: immediate to 30 days

Key Responsibilities & Skillsets:

Common Skillsets:
- 3+ years of experience in analytics, PySpark, Python, Spark, SQL, and associated data engineering jobs.
- Must have experience managing and transforming big data sets using PySpark, Spark-Scala, NumPy, and pandas (a sketch of this kind of job follows this posting).
- Excellent communication and presentation skills.
- Experience maintaining Python codebases and collaborating with customers on model evolution.
- Good knowledge of database management and Hadoop/Spark, SQL, Hive; Python expertise.
- Superior analytical and problem-solving skills.
- Able to work on a problem independently and prepare client-ready deliverables with minimal or no supervision.
- Good communication skills for client interaction.

Data Management Skillsets:
- Ability to understand data models and identify ETL optimization opportunities; exposure to ETL tools is preferred.
- Strong grasp of advanced SQL functionality (joins, nested queries, and procedures).
- Strong ability to translate functional specifications/requirements into technical requirements.
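As an illustration of the PySpark work this posting describes, here is a minimal sketch of a big-data transformation job; the file paths, column names, and dedup key are hypothetical, not taken from the posting:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("orders-transform").getOrCreate()

# Hypothetical input: raw order events landed as Parquet.
orders = spark.read.parquet("s3://raw-zone/orders/")

# Keep only the latest event per order_id (a common dedup pattern).
latest = Window.partitionBy("order_id").orderBy(F.col("event_ts").desc())
deduped = (
    orders
    .withColumn("rn", F.row_number().over(latest))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

# Simple aggregation, then write a client-ready extract.
daily = (
    deduped
    .groupBy(F.to_date("event_ts").alias("order_date"))
    .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
)
daily.write.mode("overwrite").parquet("s3://curated-zone/daily_orders/")
```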
Posted 2 months ago
5 - 8 years
15 - 25 Lacs
Pune
Hybrid
Role & Responsibilities:
- Data pipeline development: design, develop, and maintain data pipelines using Google Cloud Platform (GCP) services such as Dataflow, Dataproc, and Pub/Sub (see the Beam sketch after this posting).
- Data ingestion & transformation: build and implement data ingestion and transformation processes using tools such as Apache Beam and Apache Spark.
- Data storage management: optimize and manage data storage solutions on GCP, including BigQuery, Cloud Storage, and Cloud SQL.
- Security implementation: implement data security protocols and access controls with GCP's Identity and Access Management (IAM) and Cloud Security Command Center.
- System monitoring & troubleshooting: monitor and troubleshoot data pipelines and storage solutions using GCP's Stackdriver and Cloud Monitoring tools.
- Generative AI systems: develop and maintain scalable systems for deploying and operating generative AI models, ensuring efficient use of computational resources.
- Gen AI capability building: build generative AI capabilities among engineers, covering areas such as knowledge engineering, prompt engineering, and platform engineering.
- Knowledge engineering: gather and structure domain-specific knowledge so large language models (LLMs) can use it effectively.
- Prompt engineering: design effective prompts to guide generative AI models toward relevant, accurate, and creative text output.
- Collaboration: work with data experts, analysts, and product teams to understand data requirements and deliver tailored solutions.
- Automation: automate data processing tasks using scripting languages such as Python.
- Best practices: participate in code reviews and help establish data engineering best practices within GCP.
- Continuous learning: stay current with GCP service innovations and advancements, especially core data services (GCS, BigQuery, Cloud Storage, Dataflow, etc.).

Skills and Experience:
- Experience: 5+ years in Data Engineering or similar roles.
- GCP proficiency: expertise in designing, developing, and deploying data pipelines, with strong knowledge of GCP core data services (GCS, BigQuery, Cloud Storage, Dataflow, etc.).
- Generative AI & LLMs: hands-on experience with generative AI models and large language models (LLMs) such as GPT-4, Llama 3, and Gemini 1.5, with the ability to integrate these models into data pipelines and processes.
- Experience in web scraping.
- Technical skills: strong proficiency in Python and SQL for data manipulation and querying; experience with distributed data processing frameworks such as Apache Beam or Apache Spark is a plus.
- Security knowledge: familiarity with data security and access-control best practices.
- Collaboration: excellent communication and problem-solving skills, with a demonstrated ability to collaborate across teams.
- Project management: ability to work independently, manage multiple projects, and meet deadlines.
- Preferred knowledge: familiarity with Sustainable Finance, ESG Risk, CSRD, regulatory reporting, cloud infrastructure, and data governance best practices.
- Bonus skills: knowledge of Terraform is a plus.

Education:
- Degree: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Experience: 3-5 years of hands-on experience in data engineering.
- Certification: Google Professional Data Engineer.
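For context on the Dataflow/Beam work mentioned above, here is a minimal Apache Beam pipeline sketch in Python; the bucket paths are hypothetical, and it defaults to the local DirectRunner rather than Dataflow:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Runs locally on the DirectRunner by default; on GCP you would pass
# --runner=DataflowRunner plus project/region/staging options instead.
options = PipelineOptions()

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/input/*.csv")  # hypothetical path
        | "Parse" >> beam.Map(lambda line: line.split(","))
        | "FilterValid" >> beam.Filter(lambda row: len(row) == 3)
        | "KeyByUser" >> beam.Map(lambda row: (row[0], float(row[2])))
        | "SumPerUser" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda user, total: f"{user},{total}")
        | "Write" >> beam.io.WriteToText("gs://example-bucket/output/totals")  # hypothetical path
    )
```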
Posted 2 months ago
5 - 10 years
7 - 12 Lacs
Chennai
Work from Office
Project Role: Technology Architect
Project Role Description: Review and integrate all application requirements, including functional, security, integration, performance, quality, and operations requirements. Review and integrate the technical architecture requirements. Provide input into final decisions regarding hardware, network products, system software, and security.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Cloud Data Architecture
Minimum 5 years of experience is required.
Educational Qualification: BE or MCA

Summary: As a Technology Architect, you will be responsible for reviewing and integrating all application requirements, including functional, security, integration, performance, quality, and operations requirements. Your typical day will involve reviewing and integrating technical architecture requirements and providing input into final decisions regarding hardware, network products, system software, and security.

Roles & Responsibilities:
- A minimum of 8 years of experience with the Databricks Unified Data Analytics Platform.
- Good experience implementing data ingestion pipelines from multiple sources and creating end-to-end data pipelines on the Databricks platform (a sketch follows this posting).
- Strong educational background in technology and information architectures, along with a proven track record of delivering impactful data-driven solutions.
- Strong requirement analysis and technical solutioning skills in Data and Analytics.
- Client-facing role: running solution workshops and client visits, handling large RFP pursuits, and managing multiple stakeholders.

Technical Experience:
- 6 or more years of experience implementing data ingestion pipelines from multiple sources and creating end-to-end data pipelines on the Databricks platform.
- 2 or more years of experience using Python, PySpark, or Scala.
- Experience with Databricks on cloud, in any of AWS, Azure, or GCP; ETL, data engineering, data cleansing, and insertion into a data warehouse.
- Must-have skills: Databricks, Cloud Data Architecture, Python, Data Engineering.

Professional Attributes: Excellent writing, communication, and presentation skills. Eagerness to learn and develop on an ongoing basis. Excellent client-facing and interpersonal skills.

Qualifications: BE or MCA
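As a flavour of the Databricks ingestion work described above, a minimal PySpark sketch of loading files into a Delta table; the mount paths and table name are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks the SparkSession is provided as `spark`; getOrCreate()
# simply reuses it, so this also runs in a notebook unchanged.
spark = SparkSession.builder.getOrCreate()

# Hypothetical landing zone of raw CSV files.
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/landing/customers/")
)

# Light cleansing before loading into the warehouse layer.
clean = (
    raw
    .dropDuplicates(["customer_id"])
    .withColumn("loaded_at", F.current_timestamp())
)

# Persist as a managed Delta table for downstream consumers.
clean.write.format("delta").mode("append").saveAsTable("bronze.customers")
```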
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Gurugram
Work from Office
Management Consulting - Travel

Find endless opportunities to solve the most pressing client needs and challenges, especially against the backdrop of a global pandemic, as we adapt to the new normal.

Practice: Industry Consulting, Capability Network | Areas of Work: Travel | Level: Analyst/Consultant/Manager | Location: Pune, Gurgaon, Mumbai, Bangalore, Chennai, Hyderabad, Kolkata | Years of Exp: 2-15 years

Explore an Exciting Career at Accenture
Are you an outcome-oriented problem solver? Do you enjoy working on transformation strategies for global clients? Does working in an inclusive and collaborative environment spark your interest? Then Accenture Strategy and Consulting is the right place for you to explore limitless possibilities.

As part of the Travel Consulting practice, you will work with a team of leading practitioners across travel-aligned industries such as Hospitality, Aviation, Cargo, Airports, Cruise, and more. Our practitioners bring the right blend of strategic, operational, and industry experience to help the travel industry transform to win tomorrow's customer. As part of the Capability Network team, you'll help drive the following:
- Identify and structure key issues into an issue tree and a set of hypotheses, then plan and conduct research and analyses that address the most important issues.
- Understand the company's key strategic and operational issues, spanning market definition, industry trends, and so on.
- Develop innovative, fact-based, and achievable strategies and operating models after evaluating multiple strategic options.
- Use industry knowledge to make implementation-oriented recommendations.
- Develop business case elements to highlight the financial implications of recommendations.
- Identify industry-specific operating metrics and link them to relevant financial metrics to find the key value drivers for a client.
- Develop knowledge of prevailing trends, financials, and operating drivers across multiple industry segments.
- Compile data and conduct analyses to build a business case and model business scenarios.
- Leverage data and information to develop well-structured, cohesive presentations; apply strong PowerPoint/document creation skills to structure and create client-ready deliverables.
- Demonstrate good judgement in engaging senior stakeholders on follow-ups.

Bring your best skills forward to excel in the role:
- Ability to conduct primary and secondary research to unearth facts and generate actionable insights.
- Data-driven consulting and visualization skills: ability to create data visualizations and derive analytical insights using Power BI, Tableau, R, Python, or Alteryx.
- In-depth knowledge of digital offerings in Strategy, Marketing, Sales, or Operations.
- Familiarity with platforms such as Amadeus, Adobe Experience Manager, CRM, and SAP S/4HANA.
- Understanding of big data, data integration, data ingestion, and data quality.
- Understanding of at least one cloud platform: Azure, AWS, or Google Cloud.
- An analytical mindset with strong business acumen.
- Excellent communication and interpersonal skills.
- Cross-cultural competence with an ability to thrive in a dynamic environment.

Your experience counts!
- MBA from a Tier 1 or Tier 2 business school.
- Strong travel-industry experience in business consulting/IT consulting in any of the following sub-segments: Hospitality (Hotels/Cruises), Travel (Airlines/Cargo/Airports), Travel Services (Tour & Travel Services/Rental Services).
- Experience in specific functional areas within the above industries. Airlines: ticketing & reservations, pricing & revenue management, procurement for airline operations, ground operations, airline loyalty management. Airports: airport management systems, passenger experience management. Hotels: property management systems, guest experience, loyalty management. Air Cargo: operations, reservation systems, pricing and revenue management. Travel Services: travel payments, travel insurance, customer experience management.
- Deep experience across multiple technology strategy areas such as digital & technology strategy, operating model design, sourcing strategy, IT shared services strategy, process transformation, value case/business case formulation, agile ways of working, and enterprise architecture.
- Experience in forecasting, classification, and advanced statistical and AI/ML modelling using various approaches is desirable.
- Certifications in Agile methodology and/or IATA-approved Travel & Hospitality courses are desired.
- Project delivery experience with small, globally diverse teams in direct client-facing roles is a plus.
- Fluency in spoken German, French, or Japanese is a major plus.

What's in it for you?
- An opportunity to work on transformative projects with key G2000 clients.
- Potential to co-create with leaders in strategy, industry experts, enterprise function practitioners, and business intelligence professionals to shape and recommend innovative solutions that leverage emerging technologies.
- Ability to embed responsible business into everything, from how you serve your clients to how you operate as a responsible professional.
- Personalized training modules to develop your strategy and consulting acumen and grow your skills, industry knowledge, and capabilities.
- Opportunity to thrive in a culture committed to accelerating equality for all, with boundaryless collaboration across the entire organization.

About Accenture: Accenture is a leading global professional services company, providing a broad range of services and solutions in strategy, consulting, digital, technology, and operations. Combining unmatched experience and specialized skills across more than 40 industries and all business functions, underpinned by the world's largest delivery network, Accenture works at the intersection of business and technology to help clients improve their performance and create sustainable value for their stakeholders. With 569,000 people serving clients in more than 120 countries, Accenture drives innovation to improve the way the world works and lives. Visit us at www.accenture.com

About Accenture Strategy & Consulting: Accenture Strategy shapes our clients' future, combining deep business insight with an understanding of how technology will impact industry and business models. Our focus on issues such as digital disruption, redefining competitiveness, operating and business models, and the workforce of the future helps our clients find future value and growth in a digital world. Today, digital is changing the way organizations engage with their employees, business partners, customers, and communities. This is our unique differentiator.

To bring this global perspective to our clients, Accenture Strategy's services include those provided by our Capability Network, a distributed management consulting organization that provides management consulting and strategy expertise across the client lifecycle. Our Capability Network teams complement our in-country teams to deliver cutting-edge expertise and measurable value to clients all around the world. For more information visit https://www.accenture.com/us-en/Careers/capability-network

At the heart of every great change is a great human. If you have ideas, ingenuity, and a passion for making a difference, come and be a part of our team.
Posted 2 months ago
4 - 8 years
5 - 8 Lacs
Gurugram
Work from Office
Job Title: GN - Travel - 06
Management Level: 6 (Senior Manager)
Location: Gurugram, DDC1A, NonSTPI
Must-have skills: Travel Consulting
Good-to-have skills: ability to leverage design thinking, business process optimization, and stakeholder management.

Job Summary: This role involves driving strategic initiatives, managing business transformations, and leveraging industry expertise to create value-driven solutions.

Roles & Responsibilities: Provide strategic advisory services, conduct market research, and develop data-driven recommendations to enhance business performance.

Explore an Exciting Career at Accenture
Are you an outcome-oriented problem solver? Do you enjoy working on transformation strategies for global clients? Does working in an inclusive and collaborative environment spark your interest? Then Accenture Strategy and Consulting is the right place for you to explore limitless possibilities.

As part of the Travel Consulting practice, you will work with a team of leading practitioners across travel-aligned industries such as Hospitality, Aviation, Cargo, Airports, Cruise, and more. Our practitioners bring the right blend of strategic, operational, and industry experience to help the travel industry transform to win tomorrow's customer. As part of the Capability Network team, you'll help drive the following:
- Identify and structure key issues into an issue tree and a set of hypotheses, then plan and conduct research and analyses that address the most important issues.
- Understand the company's key strategic and operational issues, spanning market definition, industry trends, and so on.
- Develop innovative, fact-based, and achievable strategies and operating models after evaluating multiple strategic options.
- Use industry knowledge to make implementation-oriented recommendations.
- Develop business case elements to highlight the financial implications of recommendations.
- Identify industry-specific operating metrics and link them to relevant financial metrics to find the key value drivers for a client.
- Develop knowledge of prevailing trends, financials, and operating drivers across multiple industry segments.
- Compile data and conduct analyses to build a business case and model business scenarios.
- Leverage data and information to develop well-structured, cohesive presentations; apply strong PowerPoint/document creation skills to structure and create client-ready deliverables.
- Demonstrate good judgement in engaging senior stakeholders on follow-ups.

Bring your best skills forward to excel in the role:
- Ability to conduct primary and secondary research to unearth facts and generate actionable insights.
- Data-driven consulting and visualization skills: ability to create data visualizations and derive analytical insights using Power BI, Tableau, R, Python, or Alteryx.
- In-depth knowledge of digital offerings in Strategy, Marketing, Sales, or Operations.
- Familiarity with platforms such as Amadeus, Adobe Experience Manager, CRM, and SAP S/4HANA.
- Understanding of big data, data integration, data ingestion, and data quality.
- Understanding of at least one cloud platform: Azure, AWS, or Google Cloud.
- An analytical mindset with strong business acumen.
- Excellent communication and interpersonal skills.
- Cross-cultural competence with an ability to thrive in a dynamic environment.

Professional & Technical Skills:
- Relevant experience in the required domain.
- Strong analytical, problem-solving, and communication skills.
- Ability to work in a fast-paced, dynamic environment.

Additional Information:
- Opportunity to work on innovative projects.
- Career growth and leadership exposure.

About Our Company | Accenture

Qualifications
Experience: 14-16 years
Educational Qualification: Any degree
Posted 2 months ago
5 - 10 years
7 - 12 Lacs
Bengaluru
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must-have skills: Microsoft Purview
Good-to-have skills: NA
Minimum 5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will play a crucial role in managing and optimizing data infrastructure to support the organization's data needs.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Design and develop scalable data pipelines for data ingestion, transformation, and loading.
- Implement and maintain ETL processes to ensure efficient data migration and deployment.
- Collaborate with cross-functional teams to understand data requirements and design appropriate solutions.
- Ensure data quality and integrity by implementing data validation and cleansing techniques (see the sketch after this posting).
- Optimize data infrastructure and performance to meet the organization's data processing needs.

Professional & Technical Skills:
- Must-have: proficiency in Microsoft Purview.
- Strong understanding of data engineering principles and best practices.
- Experience with data modeling, database design, and SQL.
- Hands-on experience with data integration tools and technologies.
- Familiarity with cloud platforms such as Azure or AWS.
- Good-to-have: experience with data visualization tools such as Tableau or Power BI.

Additional Information:
- The candidate should have a minimum of 5 years of experience with Microsoft Purview.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Qualifications: 15 years of full-time education
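To illustrate the validation-and-cleansing step mentioned above, a minimal PySpark sketch; the schema, rules, and quarantine path are hypothetical, not part of the posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.parquet("/data/staging/transactions/")  # hypothetical path

# Rule 1: required fields must be present.
valid = df.filter(F.col("txn_id").isNotNull() & F.col("amount").isNotNull())
rejected = df.subtract(valid)

# Rule 2: amounts must be non-negative.
clean = valid.filter(F.col("amount") >= 0)
rejected = rejected.union(valid.filter(F.col("amount") < 0))

# Quarantine bad rows for investigation instead of silently dropping them.
rejected.write.mode("append").parquet("/data/quarantine/transactions/")
clean.write.mode("overwrite").parquet("/data/curated/transactions/")

print(f"clean={clean.count()}, rejected={rejected.count()}")
```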
Posted 2 months ago
1 - 6 years
8 - 13 Lacs
Pune
Work from Office
Cloud Observability Administrator
Pune, India | Enterprise IT - 22685

ZS is looking for a Cloud Observability Administrator to join our team in Pune. In this role, you will configure various observability tools and create solutions to address business problems across multiple client engagements. You will leverage information from the requirements-gathering phase and past experience to design a flexible and scalable solution, and collaborate with other team members (involved in the requirements gathering, testing, roll-out, and operations phases) to ensure seamless transitions.

What You'll Do:
- Deploy, manage, and operate a scalable, highly available, and fault-tolerant Splunk architecture.
- Onboard various log sources (Windows, Linux, firewalls, network devices) into Splunk.
- Develop alerts, dashboards, and reports in Splunk.
- Write complex SPL queries.
- Manage and administer a distributed Splunk architecture.
- Apply very good knowledge of the configuration files Splunk uses for data ingestion and field extraction.
- Perform regular upgrades of Splunk and relevant apps/add-ons.
- Bring a comprehensive understanding of AWS infrastructure, including EC2, EKS, VPC, CloudTrail, Lambda, etc.
- Automate manual tasks using shell/PowerShell scripting; knowledge of Python scripting is a plus.
- Use Linux commands confidently for server administration.

What You'll Bring:
- 1+ years of experience in Splunk development and administration; a Bachelor's degree in CS, EE, or a related discipline.
- Strong analytic, problem-solving, and programming ability.
- 1-1.5 years of relevant consulting-industry experience on medium-to-large-scale technology solution delivery engagements.
- Strong verbal, written, and team presentation skills, with the ability to articulate results and issues to internal and client teams.
- Proven ability to work creatively and analytically in a problem-solving environment.
- Ability to work within a virtual global team and contribute to the timely delivery of multiple projects.
- Knowledge of observability tools such as Cribl, Datadog, and PagerDuty is a plus.
- Knowledge of AWS Prometheus and Grafana is a plus.
- Knowledge of APM concepts is a plus.
- Knowledge of Linux/Python scripting is a plus.
- Splunk certification is a plus.

Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working: a flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of the week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel: Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To complete your application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE.
Posted 2 months ago
2 - 5 years
14 - 17 Lacs
Mumbai
Work from Office
Who you are: A seasoned Data Engineer with a passion for building and managing data pipelines in large-scale environments, with good experience in big data technologies, data integration frameworks, and cloud-based data platforms, and a strong foundation in Apache Spark, PySpark, Kafka, and SQL.

What you'll do: As a Data Engineer – Data Platform Services, your responsibilities include:

Data Ingestion & Processing
- Assisting in building and optimizing data pipelines for structured and unstructured data.
- Working with Kafka and Apache Spark to manage real-time and batch data ingestion (see the streaming sketch after this posting).
- Supporting data integration using IBM CDC and Universal Data Mover (UDM).

Big Data & Data Lakehouse Management
- Managing and processing large datasets using PySpark and Iceberg tables.
- Assisting in migrating data workloads from IIAS to Cloudera Data Lake.
- Supporting data lineage tracking and metadata management for compliance.

Optimization & Performance Tuning
- Helping optimize PySpark jobs for efficiency and scalability.
- Supporting data partitioning, indexing, and caching strategies.
- Monitoring and troubleshooting pipeline issues and performance bottlenecks.

Security & Compliance
- Implementing role-based access controls (RBAC) and encryption policies.
- Supporting data security and compliance efforts using Thales CipherTrust.
- Ensuring data governance best practices are followed.

Collaboration & Automation
- Working with data scientists, analysts, and DevOps teams to enable seamless data access.
- Assisting in the automation of data workflows using Apache Airflow.
- Supporting Denodo-based data virtualization for efficient data access.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- 4-7 years of experience in big data engineering, data integration, and distributed computing.
- Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP).
- Proficiency in Python or Scala for data processing.
- Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM).
- Understanding of data security, encryption, and compliance frameworks.

Preferred technical and professional experience
- Experience in banking or financial services data platforms.
- Exposure to Denodo for data virtualization and DGraph for graph-based insights.
- Familiarity with cloud data platforms (AWS, Azure, GCP).
- Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.
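A minimal sketch of the Kafka-to-Spark streaming ingestion mentioned above, using PySpark Structured Streaming; the broker address, topic, schema, and checkpoint path are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("account", StringType()),
    StructField("amount", DoubleType()),
])

# Read a stream of JSON events from Kafka.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "transactions")               # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Land the parsed events in the data lake, with checkpointing for recovery.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/datalake/raw/transactions/")
    .option("checkpointLocation", "/checkpoints/transactions/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```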
Posted 2 months ago
2 - 5 years
14 - 17 Lacs
Mumbai
Work from Office
Who you are: A Data Engineer specializing in enterprise data platforms, experienced in building, managing, and optimizing data pipelines for large-scale environments, with expertise in big data technologies, distributed computing, data ingestion, and transformation frameworks. Proficient in Apache Spark, PySpark, Kafka, and Iceberg tables, and able to design and implement scalable, high-performance data processing solutions.

What you'll do: As a Data Engineer – Data Platform Services, responsibilities include:

Data Ingestion & Processing
- Designing and developing data pipelines to migrate workloads from IIAS to Cloudera Data Lake.
- Implementing streaming and batch data ingestion frameworks using Kafka and Apache Spark (PySpark).
- Working with IBM CDC and Universal Data Mover to manage data replication and movement.

Big Data & Data Lakehouse Management
- Implementing Apache Iceberg tables for efficient data storage and retrieval (see the sketch after this posting).
- Managing distributed data processing with Cloudera Data Platform (CDP).
- Ensuring data lineage, cataloging, and governance for compliance with bank/regulatory policies.

Optimization & Performance Tuning
- Optimizing Spark and PySpark jobs for performance and scalability.
- Implementing data partitioning, indexing, and caching to enhance query performance.
- Monitoring and troubleshooting pipeline failures and performance bottlenecks.

Security & Compliance
- Ensuring secure data access, encryption, and masking using Thales CipherTrust.
- Implementing role-based access controls (RBAC) and data governance policies.
- Supporting metadata management and data quality initiatives.

Collaboration & Automation
- Working closely with data scientists, analysts, and DevOps teams to integrate data solutions.
- Automating data workflows using Airflow and implementing CI/CD pipelines with GitLab and Sonatype Nexus.
- Supporting Denodo-based data virtualization for seamless data access.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- 4-7 years of experience in big data engineering, data integration, and distributed computing.
- Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP).
- Proficiency in Python or Scala for data processing.
- Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM).
- Understanding of data security, encryption, and compliance frameworks.

Preferred technical and professional experience
- Experience in banking or financial services data platforms.
- Exposure to Denodo for data virtualization and DGraph for graph-based insights.
- Familiarity with cloud data platforms (AWS, Azure, GCP).
- Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.
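As a taste of the Iceberg table work above, a minimal PySpark sketch using Spark's DataFrameWriterV2 API; the catalog, table, and source paths are hypothetical, and the session is assumed to be configured with an Iceberg catalog:

```python
from pyspark.sql import SparkSession, functions as F

# Assumes a catalog named "lake" is configured as an Iceberg catalog
# (e.g. via iceberg-spark-runtime and spark.sql.catalog.lake properties).
spark = SparkSession.builder.appName("iceberg-demo").getOrCreate()

df = spark.read.parquet("/datalake/raw/trades/")  # hypothetical source

# Create (or replace) a partitioned Iceberg table from the DataFrame.
(
    df.writeTo("lake.finance.trades")
    .partitionedBy(F.col("trade_date"))
    .using("iceberg")
    .createOrReplace()
)

# Append incremental batches later with the same V2 writer.
new_batch = spark.read.parquet("/datalake/raw/trades_increment/")
new_batch.writeTo("lake.finance.trades").append()
```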
Posted 2 months ago
6 - 10 years
14 - 17 Lacs
Mumbai
Work from Office
A Data Engineer specializing in enterprise data platforms, experienced in building, managing, and optimizing data pipelines for large-scale environments, with expertise in big data technologies, distributed computing, data ingestion, and transformation frameworks. Proficient in Apache Spark, PySpark, Kafka, and Iceberg tables, and able to design and implement scalable, high-performance data processing solutions.

What you'll do: As a Data Engineer – Data Platform Services, responsibilities include:

Data Ingestion & Processing
- Designing and developing data pipelines to migrate workloads from IIAS to Cloudera Data Lake.
- Implementing streaming and batch data ingestion frameworks using Kafka and Apache Spark (PySpark).
- Working with IBM CDC and Universal Data Mover to manage data replication and movement.

Big Data & Data Lakehouse Management
- Implementing Apache Iceberg tables for efficient data storage and retrieval.
- Managing distributed data processing with Cloudera Data Platform (CDP).
- Ensuring data lineage, cataloging, and governance for compliance with bank/regulatory policies.

Optimization & Performance Tuning
- Optimizing Spark and PySpark jobs for performance and scalability.
- Implementing data partitioning, indexing, and caching to enhance query performance.
- Monitoring and troubleshooting pipeline failures and performance bottlenecks.

Security & Compliance
- Ensuring secure data access, encryption, and masking using Thales CipherTrust.
- Implementing role-based access controls (RBAC) and data governance policies.
- Supporting metadata management and data quality initiatives.

Collaboration & Automation
- Working closely with data scientists, analysts, and DevOps teams to integrate data solutions.
- Automating data workflows using Airflow (see the DAG sketch after this posting) and implementing CI/CD pipelines with GitLab and Sonatype Nexus.
- Supporting Denodo-based data virtualization for seamless data access.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- 6-10 years of experience in big data engineering, data processing, and distributed computing.
- Proficiency in Apache Spark, PySpark, Kafka, Iceberg, and Cloudera Data Platform (CDP).
- Strong programming skills in Python, Scala, and SQL.
- Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM).
- Knowledge of data security, encryption, and compliance frameworks.
- Experience with metadata management and data quality solutions.

Preferred technical and professional experience
- Experience with data migration projects in the banking/financial sector.
- Knowledge of graph databases (DGraph Enterprise) and data virtualization (Denodo).
- Exposure to cloud-based data platforms (AWS, Azure, GCP).
- Familiarity with MLOps integration for AI-driven data processing.
- Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.
- Architectural review of, and recommendations on, migration/transformation solutions.
- Experience working with banking data models.
- Knowledge of the "Meghdoot" cloud platform.
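Since several of these roles mention orchestrating workflows with Apache Airflow, here is a minimal DAG sketch; the DAG name, schedule, and task bodies are hypothetical placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling batch from source")        # placeholder for real ingestion


def transform():
    print("running PySpark transformations")  # placeholder


def load():
    print("loading into the data lake")       # placeholder


# Airflow 2.4+ uses `schedule`; older versions use `schedule_interval`.
with DAG(
    dag_id="daily_platform_etl",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    t1 >> t2 >> t3
```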
Posted 2 months ago
2 - 7 years
14 - 17 Lacs
Mumbai
Work from Office
What you'll do: As a Data Engineer – Data Platform Services, you will be responsible for:

Data Migration & Modernization
- Leading the migration of ETL workflows from IBM DataStage to PySpark, ensuring performance optimization and cost efficiency.
- Designing and implementing data ingestion frameworks using Kafka and PySpark, replacing the legacy DataStage ETL pipeline.
- Migrating the analytical platform from IBM Integrated Analytics System (IIAS) to Cloudera Data Lake on CDP.

Data Engineering & Pipeline Development
- Developing and maintaining scalable, fault-tolerant, and optimized data pipelines on Cloudera Data Platform.
- Implementing data transformations, enrichment, and quality checks to ensure accuracy and reliability.
- Leveraging Denodo for data virtualization and enabling seamless access to distributed datasets.

Performance Tuning & Optimization
- Optimizing PySpark jobs for efficiency, scalability, and reduced cost on Cloudera.
- Fine-tuning query performance on Iceberg tables and ensuring efficient data storage and retrieval.
- Collaborating with Cloudera ML engineers to integrate machine learning workloads into data pipelines.

Security & Compliance
- Implementing Thales CipherTrust encryption and tokenization mechanisms for secure data processing.
- Ensuring compliance with bank/regulatory security guidelines, data governance policies, and best practices.

Collaboration & Leadership
- Working closely with business stakeholders, architects, and data scientists to align solutions with business goals.
- Leading and mentoring junior data engineers, conducting code reviews, and promoting best practices.
- Collaborating with DevOps teams to streamline CI/CD pipelines, using GitLab and Nexus Repository for efficient deployments.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- 12+ years of experience in data engineering, ETL, and data platform modernization.
- Hands-on experience in IBM DataStage and PySpark, with a track record of migrating legacy ETL workloads.
- Expertise in Apache Iceberg, Cloudera Data Platform, and big data processing frameworks.
- Strong knowledge of Kafka, Airflow, and cloud-native data processing solutions.
- Experience with Denodo for data virtualization and Talend DQ for data quality.
- Proficiency in SQL, NoSQL, and graph databases (DGraph Enterprise).
- Strong understanding of data security, encryption, and compliance standards (Thales CipherTrust).
- Experience with DevOps, CI/CD pipelines, GitLab, and Sonatype Nexus Repository.
- Excellent problem-solving, analytical, and communication skills.

Preferred technical and professional experience
- Experience with Cloudera migration projects in banking or financial domains.
- Experience working with banking data models.
- Knowledge of Cloudera ML, Qlik Sense/Tableau reporting, and integration with data lakes.
- Hands-on experience with QuerySurge for automated data testing.
- Understanding of code quality and security best practices using CheckMarx.
- IBM, Cloudera, or AWS/GCP certifications in Data Engineering, Cloud, or Security.
- Knowledge of the "Meghdoot" cloud platform.
- Architectural design and recommendations for the best possible solutions.
Posted 2 months ago
2 - 7 years
8 - 17 Lacs
Pune, Bengaluru, Delhi / NCR
Hybrid
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Adobe Experience Platform (AEP) CDP. We are looking for a motivated and detail-oriented Adobe Experience Platform (AEP) Developer to join our team. The ideal candidate will assist in the development, integration, and optimization of AEP solutions, working closely with senior developers and architects.

Responsibilities:
- Architecture & implementation: assist in the implementation of scalable and secure AEP architecture.
- Data integration: support the integration of various data sources to create unified customer profiles.
- Customer identity management: manage the identity resolution process for enriched customer insights.
- Stakeholder collaboration: work with marketing, analytics, and IT teams to align AEP solutions with business goals for customer segmentation and personalization.
- Compliance: ensure compliance with data privacy regulations (GDPR, CCPA).
- Martech integration: assist in integrating AEP CDP with other Adobe tools and the marketing technology stack.
- Performance optimization: monitor AEP platform usage and assist in performance optimization tasks.

Qualifications we seek in you
Minimum qualifications/skills:
- 5+ years in marketing technology, DMP/CDP implementation, and team leadership.
- Good experience with Adobe Experience Platform, customer data models, and data governance.
- Strong skills in integrating data for segmentation and personalization, and familiarity with Adobe Experience Cloud.
- Knowledge of data privacy regulations and cloud platforms (AWS, GCP, Azure).
- Strong communication and stakeholder management abilities.
- Bachelor's degree in a related field; Master's preferred.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 2 months ago
4 - 6 years
14 - 22 Lacs
Chennai
Work from Office
Maintain metadata, data dictionaries, and lineage documentation. Build and maintain scalable data pipelines and ETL processes for BI needs. Optimize data models and ensure clean, reliable, and well-structured data for Power BI. Integrate data.

Required candidate profile: Strong SQL development; experience with Power BI dataset structuring and integration. Experience in Python for ETL, automation, APIs, and scripting for data ingestion (a sketch follows this posting). Azure, SSIS, and cloud-based data services.
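A minimal sketch of the Python-for-ETL pattern this posting describes, pulling data from an API into a SQL database for BI; the endpoint, table name, and connection string are hypothetical:

```python
import pandas as pd
import requests
from sqlalchemy import create_engine

# Hypothetical REST endpoint returning JSON records.
resp = requests.get("https://api.example.com/v1/sales", timeout=30)
resp.raise_for_status()
df = pd.DataFrame(resp.json())

# Light cleanup so Power BI gets tidy, typed columns.
df["order_date"] = pd.to_datetime(df["order_date"])
df = df.dropna(subset=["order_id"]).drop_duplicates("order_id")

# Hypothetical Azure SQL connection string; replace with your own.
engine = create_engine(
    "mssql+pyodbc://user:password@server.database.windows.net/salesdb"
    "?driver=ODBC+Driver+18+for+SQL+Server"
)
df.to_sql("stg_sales", engine, if_exists="replace", index=False)
```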
Posted 2 months ago
8 - 13 years
25 - 30 Lacs
Bengaluru
Hybrid
Overall 8+ years of solid experience in data projects.
- Design, develop, and maintain robust ETL/ELT pipelines for data ingestion, transformation, and storage.
- Proficient in SQL, with experience in complex joins, subqueries, functions, and procedures; able to perform SQL tuning and query optimization without support.
- Design, develop, and maintain ETL pipelines using Databricks and PySpark to extract, transform, and load data from various sources.
- Good working experience with Delta tables, deduplication, and merging on terabyte-scale data sets (see the merge sketch after this posting).
- Optimize and fine-tune existing ETL workflows for performance and scalability.
- Excellent knowledge of dimensional modelling and data warehousing.
- Experience working with large data sets; experience with batch and real-time data processing (good to have).
- Implement data validation and quality checks, and ensure adherence to security and compliance standards; able to develop reliable, secure, compliant data processing systems.
- Work closely with cross-functional teams to support data analytics, reporting, and business intelligence initiatives.
- Self-driven; able to work independently without support.
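As a sketch of the Delta deduplication-and-merge pattern called out above, using the delta-lake Python API on Databricks; the table and key names are hypothetical:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()  # provided on Databricks

updates = spark.read.parquet("/mnt/landing/customers_delta/")  # hypothetical

# Deduplicate the incoming batch first: keep the newest row per key,
# since MERGE requires a unique match on the source side.
w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
latest = (
    updates.withColumn("rn", F.row_number().over(w))
    .filter("rn = 1")
    .drop("rn")
)

# Upsert into the target Delta table.
target = DeltaTable.forName(spark, "silver.customers")
(
    target.alias("t")
    .merge(latest.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```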
Posted 2 months ago
13 - 20 years
45 - 50 Lacs
Pune
Work from Office
About the Role
Job Title: Solution Architect, VP
Location: Pune, India

Role Description
As a solution architect supporting the individual business-aligned, strategic, or regulatory book of work, you will work closely with the projects' business stakeholders, business analysts, data solution architects, engineers, the lead solution architect, the principal architect, and the Chief Architect to help projects break business requirements into implementable building blocks and design the solution's target architecture. It is the solution architect's responsibility to align the design with general (enterprise) architecture principles, apply agreed best practices and patterns, and help the engineering teams deliver against the architecture in an event-driven, service-oriented environment.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those aged 35 and above

Your key responsibilities
- In projects, work with SMEs and stakeholders to derive the individual components of the solution.
- Design the target architecture for a new solution or when adding new capabilities to an existing solution.
- Assure proper documentation of the High-Level Design and Low-Level Design of the delivered solution.
- Quality-assure the delivery against the agreed and approved architecture, i.e. provide delivery guidance and governance.
- Prepare the High-Level Design for review and approval by design authorities so projects can proceed into implementation.
- Support creation of the Low-Level Design as it is delivered, through to final go-live.

Your skills and experience
- Very proficient at designing/architecting solutions in an event-driven environment leveraging service-oriented principles.
- Proficient in Java and the delivery of Spring-based services.
- Proficient at building systems in a decoupled, event-driven environment leveraging messaging/streaming, i.e. Kafka.
- Very good experience designing and implementing data streaming and data ingest pipelines for heavy throughput while providing different delivery guarantees (at-least-once, exactly-once); a sketch of the exactly-once pattern follows this posting.
- Very good understanding of non-streaming ETL and ELT approaches for data ingest.
- Solid understanding of containerized, distributed systems and of building auto-scalable, stateless services in a cluster (concepts of quorum and consensus).
- Solid understanding of standard RDBMS, with data-engineering-level proficiency in T-SQL and/or PL/SQL and/or PL/pgSQL, preferably all three.
- Good understanding of how RDBMS generally work; specific tuning experience on SQL Server, Oracle, or PostgreSQL is welcome.
- Understanding of modeling/implementing different data modelling approaches and their respective pros and cons (e.g. normalized, denormalized, star, snowflake, Data Vault 2.0).
- Strong working experience with GCP architecture is a benefit (Apigee, BigQuery, Pub/Sub, Cloud SQL, Dataflow); an appropriate GCP architecture-level certification even more so.
- Experience with other languages is more than welcome (C#, Python).

How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
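To illustrate the exactly-once guarantee mentioned above, a minimal transactional-producer sketch using the confluent-kafka Python client; the broker, topic, and transactional id are hypothetical, and the posting's own stack is Java/Spring, where the equivalent is Kafka's transactional producer API:

```python
from confluent_kafka import Producer

# A transactional.id makes the producer idempotent and allows atomic
# writes across batches: consumers reading with isolation.level
# "read_committed" see each message exactly once.
producer = Producer({
    "bootstrap.servers": "broker:9092",        # hypothetical broker
    "transactional.id": "ingest-pipeline-1",   # hypothetical id
    "enable.idempotence": True,
})

producer.init_transactions()
producer.begin_transaction()
try:
    for i in range(100):
        producer.produce("events", key=str(i), value=f"payload-{i}")
    producer.commit_transaction()  # all 100 messages become visible atomically
except Exception:
    producer.abort_transaction()   # none of them are delivered
    raise
```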
Posted 2 months ago
4 - 6 years
15 - 25 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm greetings from SP Staffing!

Role: PySpark Developer
Experience required: 4 to 6 years
Work location: Hyderabad/Bangalore/Pune/Chennai/Kochi
Required skills: PySpark, Python, Spark SQL, ETL

Interested candidates can send resumes to nandhini.spstaffing@gmail.com
Posted 2 months ago
2 - 7 years
4 - 8 Lacs
Chennai
Work from Office
Overview / Responsibilities / Requirements
- Java development with hands-on experience in Spring Boot.
- Strong knowledge of UI frameworks, particularly Angular, for developing dynamic, interactive web applications.
- Experience with Kubernetes for managing microservices-based applications in a cloud environment.
- Familiarity with Postgres (relational) and Neo4j (graph database) for managing complex data models.
- Experience in metadata modeling and designing data structures that support high performance and scalability.
- Expertise in Camunda BPMN and business process automation.
- Experience implementing rules with the Drools rules engine.
- Knowledge of Unix/Linux systems for application deployment and management.
- Experience building data ingestion frameworks to process and handle large datasets.
Posted 2 months ago
8 - 13 years
10 - 20 Lacs
Bengaluru
Work from Office
Hi,

Greetings from Sun Technology Integrators!

This is regarding a job opening with Sun Technology Integrators, Bangalore. Please find the job description below for your reference. Kindly let me know your interest and share your updated CV to nandinis@suntechnologies.com with the following details ASAP:
- C.CTC
- E.CTC
- Notice period
- Current location
- Are you serving notice period / available immediately?
- Experience in Snowflake
- Experience in Matillion

Shift timings: 2:00 PM-11:00 PM (free cab drop facility, plus food). Please let me know if any of your friends are looking for a job change, and kindly share references. Only candidates serving notice or available immediately can apply.

Interview process: 1 round (virtual) + final round (face-to-face).
Please note: Work From Office only (no hybrid or work from home).

Mandatory skills: Snowflake, SQL, ETL, data ingestion, data modeling, data warehousing, Python, Matillion, AWS S3, EC2 (a connector sketch follows this posting)
Preferred skills: SSRS, SSIS, Informatica, shell scripting

Venue details:
Sun Technology Integrators Pvt Ltd
No. 496, 4th Block, 1st Stage, HBR Layout (a stop ahead of Nagawara, towards K. R. Puram), Bangalore 560043
Company URL: www.suntechnologies.com

Thanks and regards,
Nandini S | Sr. Technical Recruiter
Sun Technology Integrators Pvt. Ltd.
nandinis@suntechnologies.com
www.suntechnologies.com
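For candidates gauging the Snowflake/Python skills listed here, a minimal sketch using the snowflake-connector-python package; the account, credentials, stage, and table are hypothetical:

```python
import snowflake.connector

# Hypothetical credentials; in practice these come from a vault or env vars.
conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",
    user="ETL_USER",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Bulk-load staged files into a table, a common Snowflake ingestion step.
    cur.execute(
        "COPY INTO stg_orders FROM @raw_stage/orders/ "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    cur.execute("SELECT COUNT(*) FROM stg_orders")
    print("rows loaded:", cur.fetchone()[0])
finally:
    conn.close()
```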
Posted 2 months ago
3 - 8 years
10 - 20 Lacs
Bengaluru
Work from Office
Hi,

Greetings from Sun Technology Integrators!

This is regarding a job opening with Sun Technology Integrators, Bangalore. Please find the job description below for your reference. Kindly let me know your interest and share your updated CV to nandinis@suntechnologies.com with the following details ASAP:
- C.CTC
- E.CTC
- Notice period
- Current location
- Are you serving notice period / available immediately?
- Experience in Snowflake
- Experience in Matillion

Shift timings: 2:00 PM-11:00 PM (free cab drop facility, plus food). Please let me know if any of your friends are looking for a job change, and kindly share references. Only candidates serving notice or available immediately can apply.

Interview process: 2 rounds (virtual) + final round (face-to-face).
Please note: Work From Office only (no hybrid or work from home).

Mandatory skills: Snowflake, SQL, ETL, data ingestion, data modeling, data warehousing, Python, Matillion, AWS S3, EC2
Preferred skills: SSRS, SSIS, Informatica, shell scripting

Venue details:
Sun Technology Integrators Pvt Ltd
No. 496, 4th Block, 1st Stage, HBR Layout (a stop ahead of Nagawara, towards K. R. Puram), Bangalore 560043
Company URL: www.suntechnologies.com

Thanks and regards,
Nandini S | Sr. Technical Recruiter
Sun Technology Integrators Pvt. Ltd.
nandinis@suntechnologies.com
www.suntechnologies.com
Posted 2 months ago
2 - 5 years
14 - 17 Lacs
Hyderabad
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address clients' needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Must have 5+ years of experience in big data: Hadoop, Spark, Scala, Python, HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Developed Python and PySpark programs for data analysis.
- Good working experience using Python to develop custom frameworks for generating rules (much like a rules engine).
- Developed Python code to gather data from HBase and designed solutions implemented with PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations, and Hive context objects to perform read/write operations (a sketch follows this posting).

Preferred technical and professional experience
- Understanding of DevOps.
- Experience building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.
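A minimal sketch of the Spark-with-Hive pattern this posting describes: reading a Hive table, applying a business transformation with the DataFrame API, and writing the result back. The database and table names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

# enableHiveSupport() lets Spark read and write Hive metastore tables.
spark = (
    SparkSession.builder
    .appName("hive-transform")
    .enableHiveSupport()
    .getOrCreate()
)

sales = spark.table("retail.sales")  # hypothetical Hive table

# Business transformation: flag high-value orders and aggregate per region.
summary = (
    sales
    .withColumn("high_value", F.col("amount") > 10000)
    .groupBy("region", "high_value")
    .agg(F.count("*").alias("orders"), F.sum("amount").alias("total"))
)

# Write the result back as a Hive table for downstream analysts.
summary.write.mode("overwrite").saveAsTable("retail.sales_summary")
```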
Posted 2 months ago