15.0 - 20.0 years
4 - 8 Lacs
Chennai
Work from Office
About the Role
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Snowflake Data Warehouse, PySpark, Core Banking
Good-to-have skills: AWS Big Data
Minimum 5 years of experience is required.
Educational Qualification: 15 years of full-time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data processing workflow.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processing workflows to enhance efficiency and effectiveness.
Professional & Technical Skills:
- Must-have skills: Proficiency in Snowflake Data Warehouse, Core Banking, PySpark.
- Good-to-have skills: Experience with AWS Big Data.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud-based data storage solutions and architectures.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse.
- This position is based in Chennai.
- 15 years of full-time education is required.
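Illustrative of the pipeline work this listing describes, here is a minimal PySpark-to-Snowflake sketch. It assumes the Snowflake Spark connector is installed; the paths, table names, and connection options are placeholders, not the employer's actual configuration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-to-snowflake").getOrCreate()

# Placeholder connection options; real values would come from a secrets
# manager, never hard-coded.
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}

# Extract: read raw records (path and layout are illustrative).
raw = spark.read.parquet("s3://example-bucket/raw/transactions/")

# Transform: deduplicate and apply a basic data-quality filter.
clean = (raw.dropDuplicates(["txn_id"])
            .filter(F.col("amount").isNotNull()))

# Load: append into Snowflake through the Spark connector.
(clean.write
      .format("net.snowflake.spark.snowflake")
      .options(**sf_options)
      .option("dbtable", "TRANSACTIONS")
      .mode("append")
      .save())
```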
Posted 3 weeks ago
5.0 - 7.0 years
14 - 20 Lacs
Pune
Hybrid
So, what's the role all about?
The Prompt Engineer optimizes prompts to generative AI models across NiCE's Illuminate applications. As part of the Illuminate Research team, the Prompt Engineer works with several groups in the business to help our applications deliver the highest quality customer experience. The Prompt Engineer partners with global development teams to help diagnose and resolve prompt-based issues. This includes helping to define and execute tests for LLM-based systems that are difficult to evaluate with traditional test automation tools. The Prompt Engineer also helps educate the development teams on advances in prompt engineering and helps update production prompts to evolving industry best practices.
How will you make an impact?
- Regularly review production metrics and specific problem cases to find opportunities for improvement.
- Help diagnose and resolve issues with production prompts in English.
- Refine prompts to generative AI systems to achieve customer goals.
- Collect and present quantitative analysis on solution success.
- Work with application developers to implement new production monitoring tools and metrics.
- Work with architects and Product Managers to implement prompts to support new features.
- Meet regularly with teams working in United States Mountain and Pacific time zones (UTC-7:00 and UTC-8:00).
- Review new prompts and prompt changes with Machine Learning Engineers.
- Consult with Machine Learning Engineers on more challenging problems.
- Stay informed about new advances in prompt engineering.
Have you got what it takes?
- Fluent in written and spoken English.
- BS in a technology-related field such as computer science, business intelligence/analytics, or finance.
- 5-7 years' work experience in a technology-related industry or position.
- Familiarity with best practices in prompt engineering, including differences in prompts between major LLM vendors.
- Ability to develop and maintain good working relationships with cross-functional teams.
- Ability to clearly communicate and present to internal and external stakeholders.
- Experience with Python and at least one web app framework for prototyping, e.g., Streamlit or Flask.
What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!
Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7815
Reporting into: Tech Manager
Role Type: Individual Contributor
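As context for the prototyping requirement, here is a minimal sketch of a Streamlit prompt-comparison harness of the kind such a role might build. The `call_model` function is a hypothetical stub, not any vendor's API.

```python
import streamlit as st

def call_model(prompt: str, user_input: str) -> str:
    """Hypothetical stub; a real harness would call an LLM vendor's API here."""
    return f"[model output for prompt={prompt!r}, input={user_input!r}]"

st.title("Prompt A/B comparison")

prompt_a = st.text_area("Prompt variant A", "Summarize the call transcript.")
prompt_b = st.text_area("Prompt variant B", "Summarize the call transcript in three bullets.")
user_input = st.text_area("Test input", "Customer called about a billing error...")

if st.button("Run both"):
    col_a, col_b = st.columns(2)
    with col_a:
        st.subheader("A")
        st.write(call_model(prompt_a, user_input))
    with col_b:
        st.subheader("B")
        st.write(call_model(prompt_b, user_input))
```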
Posted 3 weeks ago
0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
About the Role
This role is accountable for running day-to-day operations of the Data Platform in Azure/AWS Databricks. The Data Engineer is accountable for ongoing development and enhancement support, maintaining data availability and data quality, and ensuring performance and stability of the system.
1. Designing and implementing data ingestion pipelines from multiple sources using Azure Databricks
2. Ensuring data pipelines run smoothly and efficiently
3. Adherence to security, regulatory and audit control guidelines
4. Driving optimization, continuous improvement and efficiency
Essential for this role:
- Minimum 5 years of experience in the data analytics field
- Experience with Azure/AWS Databricks
- Experience in building and optimizing data pipelines, architectures and data sets
- Excellent experience in Scala or Python, PySpark and SQL
- Ability to troubleshoot and optimize complex queries on the Spark platform
- Knowledgeable on structured and unstructured data design/modelling, data access and data storage techniques
- Expertise in designing and deploying data applications on cloud solutions, such as Azure or AWS
- Hands-on experience in performance tuning and optimizing code running in a Databricks environment
- Demonstrated analytical and problem-solving skills, particularly those that apply to a big data environment
Technical / Professional Skills:
- Azure/AWS Databricks
- Python / Scala / Spark / PySpark
- HIVE / HBase / Impala / Parquet
- Sqoop, Kafka, Flume
- SQL and RDBMS
- Airflow
- Jenkins / Bamboo
- GitHub / Bitbucket
- Nexus
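As an illustration of the ingestion work described above, here is a minimal sketch of an incremental pipeline using Databricks Auto Loader. The paths and table names are placeholders, and the ambient `spark` session is an assumption of the Databricks notebook/job environment.

```python
from pyspark.sql import functions as F

# `spark` is provided by the Databricks runtime in notebooks and jobs.
raw_stream = (spark.readStream
    .format("cloudFiles")                                    # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/schema/orders")
    .load("/mnt/landing/orders/"))

# Stamp each record on its way into a bronze Delta table.
bronze = raw_stream.withColumn("_ingested_at", F.current_timestamp())

(bronze.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/bronze/orders")
    .trigger(availableNow=True)                              # incremental batch run
    .toTable("bronze.orders"))
```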
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: AWS Data Engineer
Location: Pune, Jaipur, Bengaluru, Hyderabad, Noida
Duration: Full-time
Positions: Multiple
Responsibilities:
- Defines, designs, develops and tests software components/applications using AWS (Databricks on AWS, AWS Glue, Amazon S3, AWS Lambda, Amazon Redshift, AWS Secrets Manager)
- Strong SQL skills and experience handling structured and unstructured datasets
- Experience in data modeling and advanced SQL techniques
- Experience implementing AWS Glue, Airflow, or any other data orchestration tool using the latest technologies and techniques
- Good exposure to application development
- The candidate should work independently with minimal supervision
Must Have:
- Hands-on experience with a distributed computing framework like Databricks or the Spark ecosystem (Spark Core, PySpark, Spark Streaming, Spark SQL)
- Willing to work with product teams to best optimize product features/functions
- Experience with batch workloads and real-time streaming with high-volume data frequency
- Performance optimization of Spark workloads
- Environment setup, user management, authentication and cluster management on Databricks
- Professional curiosity and the ability to enable yourself in new technologies and tasks
- Good understanding of SQL and a good grasp of relational and analytical database management theory and practice
Good To Have:
- Experience with Databricks migration from on-premise to cloud or cloud to cloud
- Migration of ETL workloads from Apache Spark implementations to Databricks
- Experience with Databricks ML will be a plus
- Migration from Spark 2.0 to Spark 3.5
Key Skills:
- Python, SQL and PySpark
- Big Data ecosystem (Hadoop, Hive, Sqoop, HDFS, HBase)
- Spark ecosystem (Spark Core, Spark Streaming, Spark SQL) / Databricks
- AWS (AWS Glue, Databricks on AWS, Lambda, Amazon Redshift, Amazon S3, AWS Secrets Manager)
- Data Modelling, ETL Methodology
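Illustrative of the Glue development named in this listing, here is a minimal sketch of a Glue PySpark job skeleton. It assumes the Glue runtime (which provides the awsglue libraries); the catalog database, table, and bucket names are placeholders.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve arguments and initialise the job.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (names are illustrative).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders")

# Transform with plain Spark, then write curated Parquet to S3.
df = dyf.toDF().filter("order_status = 'COMPLETE'")
df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```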
Posted 3 weeks ago
6.0 - 8.0 years
8 - 10 Lacs
Chennai
Work from Office
We are seeking a highly experienced Big Data Lead with strong expertise in Apache Spark, Spark SQL, and Spark Streaming. The ideal candidate should have extensive hands-on experience with the Hadoop ecosystem, a solid grasp of multiple programming languages including Java, Scala, and Python, and a proven ability to design and implement data processing pipelines in distributed environments.
Roles & Responsibilities:
- Lead design and development of scalable data processing pipelines using Apache Spark, Spark SQL, and Spark Streaming
- Work with Java, Scala, and Python to implement big data solutions
- Design efficient data ingestion pipelines leveraging Sqoop, Kafka, HDFS, and MapReduce
- Optimize and troubleshoot Spark jobs for performance and reliability
- Interface with relational databases (Oracle, MySQL, SQL Server) and NoSQL databases
- Work within Unix/Linux environments, employing tools like Git, Jenkins, and CI/CD pipelines
- Collaborate with cross-functional teams to ensure delivery of robust big data solutions
- Ensure code quality through unit testing, BDD/TDD practices, and automated testing frameworks
Competencies Required:
- 6+ years of hands-on experience in Apache Spark, Spark SQL, and Spark Streaming
- Strong proficiency in Java, Scala, and Python as applied to Spark applications
- In-depth experience with the Hadoop ecosystem: HDFS, MapReduce, Hive, HBase, Sqoop, and Kafka
- Proficiency in working with both relational and NoSQL databases
- Hands-on experience with build and automation tools like Maven, Gradle, Jenkins, and version control systems like Git
- Experience working in Linux/Unix environments and developing RESTful services
- Familiarity with modern testing methodologies including unit testing, BDD, and TDD
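As a sketch of the Spark Streaming plus Kafka ingestion work described, here is a minimal Structured Streaming job. It assumes the spark-sql-kafka package is on the classpath; broker, topic, and schema are illustrative.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Expected shape of each Kafka message value (illustrative).
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    # Kafka delivers bytes; decode and parse the JSON payload.
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*"))

# Console sink for illustration; a production job would write to
# HDFS/Hive/Delta with the same checkpointing pattern.
query = (events.writeStream
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/events")
    .start())
query.awaitTermination()
```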
Posted 3 weeks ago
11.0 - 17.0 years
45 - 50 Lacs
Pune
Work from Office
Job Title: Fintech Product Engineering Lead
Corporate Title: VP
Location: Pune, India
Role Description
The Engineer is responsible for managing or performing work across multiple areas of the bank's overall IT platform/infrastructure, including analysis, development, and administration. It may also involve taking functional oversight of engineering delivery for specific departments. Work includes:
- Planning and developing entire engineering solutions to accomplish business goals
- Building reliability and resiliency into solutions with appropriate testing and reviewing throughout the delivery lifecycle
- Ensuring maintainability and reusability of engineering solutions
- Ensuring solutions are well architected and can be integrated successfully into the end-to-end business process flow
- Reviewing engineering plans and quality to drive re-use and improve engineering capability
- Participating in industry forums to drive adoption of innovative technologies, tools and solutions in the Bank
Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel.
You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance
Your Key Responsibilities:
- Ability to navigate a strong sense of urgency while maintaining focus and clarity
- Skilled at solving complex design challenges independently, without needing oversight
- Proven track record of quickly delivering high-quality code and features
- Able to inspire and energise teams through urgency, ownership, and technical excellence
- Willing to do whatever it takes to ensure product success, from strategy to hands-on execution
- Deep experience in architecting scalable systems (HLD & LLD) in fast-paced environments
- Comfortable leading through ambiguity, change, and high-growth pressure
- Known for balancing speed with engineering quality and operational readiness
- Strong communicator: can align teams, resolve conflicts, and drive decisions fast
- A true builder mindset: acts with ownership, speed, and high accountability
Your skills and experience
- Hands-on experience in building responsive UIs with React and JavaScript
- Hands-on knowledge of Go (Golang) / Java and the Gin / Spring Boot frameworks for backend development
- Proficient in HTML, CSS and styling tools like Tailwind
- Proficient in RESTful, GraphQL and gRPC for building scalable and high-performance APIs
- Experience with GCP/AWS for building scalable, resilient microservice-based architectures
- Experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL, Firestore, BigTable)
- Experience with logging, monitoring and alerting using, e.g., Grafana, Prometheus, ELK
- Familiarity with CI/CD pipelines, automated testing and deployment strategies, with detailed knowledge of Terraform
- Knowledge of best practices for building secure applications (e.g., mTLS, encryption, OAuth, JWT and data compliance)
- Knowledge of disaster recovery, zero-downtime deploys, and backup strategies
How we'll support you
Posted 3 weeks ago
4.0 - 9.0 years
10 - 14 Lacs
Pune
Work from Office
Job Title: Strategic Data Archive Onboarding Engineer, AS
Location: Pune, India
Role Description
Strategic Data Archive is an internal service which enables applications to implement records management for regulatory requirements, application decommissioning, and application optimization. You will work closely with other teams, providing hands-on onboarding support by helping them define record content and metadata, configuring archiving, supporting testing, and creating defensible documentation that archiving was complete. You will need to both support and manage the expectations of demanding internal clients.
What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance
Your key responsibilities
- Provide responsive customer service, helping internal clients understand and efficiently manage their records management risks
- Explain our archiving services (both the business value and technical implementation) and respond promptly to inquiries
- Support the documentation and approval of requirements, including record content and metadata
- Identify and facilitate implementing an efficient solution to meet the requirements
- Manage expectations and provide regular updates, frequently to senior stakeholders
- Configure archiving in test environments; you will not be coding new functionality, but will be making configuration changes maintained in a code repository and deployed with standard tools
- Support testing, ensuring clients have appropriately managed implementation risks
- Help resolve issues including data issues, environment challenges, and code bugs
- Promote configurations from test environments to production
- Work with Production Support to ensure archiving is completed and evidenced
- Contribute towards a culture of learning and continuous improvement
- Partner with teams in multiple locations
Your skills and experience
- Delivers against tight deadlines in a fast-paced environment
- Manages others' expectations and meets commitments
- High degree of accuracy and attention to detail
- Ability to communicate (written and verbal) both business concepts and technical details concisely, and to influence partners including senior managers
- High analytical capability and able to quickly grasp new contexts; we support multiple areas of the Bank
- Expresses opinions while supporting group decisions
- Ensures deliverables are clearly documented and holds self and others accountable for meeting those deliverables
- Ability to identify risks at an early stage and implement mitigating strategies
- Flexibility and willingness to work autonomously and collaboratively
- Ability to work in virtual teams, agile environments and matrixed organizations
- Treats everyone with respect and embraces diversity
- Bachelor's degree from an accredited college or university desirable
- Minimum 4 years' experience implementing IT solutions in a global financial institution
- Comfortable with technology (e.g., SQL, FTP, XML, JSON) and a desire and ability to learn new skills as required (e.g., Fabric, Kubernetes, Kafka, Avro, Ansible)
- Must be an expert in SQL and have Python programming experience
- Financial markets and Google Cloud Platform knowledge a plus, while curiosity is a requirement
About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 3 weeks ago
5.0 - 10.0 years
22 - 27 Lacs
Navi Mumbai
Work from Office
As an Architect at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Architect, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.
In this role, your responsibilities may include:
- Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Strong understanding of data lake approaches, industry standards and best practices
- Detailed understanding of the Hadoop framework and ecosystem, MapReduce, and data on containers (data in OpenShift)
- Applies individual experience/competency and IBM's structured architectural thinking model to analyzing client IT systems
- Experience with relational SQL, Big Data, etc.
- Experience with cloud-native platforms such as AWS, Azure, Google Cloud, IBM Cloud, or cloud-native data platforms like Snowflake
Preferred technical and professional experience:
- Knowledge of MS Azure Cloud
- Experience in Unix shell scripting and Python
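For the enterprise-search responsibility mentioned above, here is a minimal sketch using the official Elasticsearch Python client (8.x); the host, index name, and documents are illustrative.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index a document (index name and fields are illustrative).
es.index(index="tickets", id="1",
         document={"subject": "billing error", "priority": "high"})

# Full-text search over the subject field.
resp = es.search(index="tickets",
                 query={"match": {"subject": "billing"}})
for hit in resp["hits"]["hits"]:
    print(hit["_source"])
```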
Posted 3 weeks ago
5.0 - 7.0 years
13 - 17 Lacs
Bengaluru
Work from Office
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.
In your role, you will be responsible for:
- Skilled use of multiple GCP services: GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer, etc.
- Must have Python and SQL work experience; proactive, collaborative and able to respond to critical situations
- Ability to analyse data for functional business requirements and interface directly with customers
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform
- Skilled in multiple GCP services: GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer
- An ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work
- You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies
- End-to-end functional knowledge of the data pipeline/transformation implementations the candidate has done, including the purpose/KPIs for which each data transformation was done
Preferred technical and professional experience:
- Experience with AEM core technologies: OSGi services, Apache Sling, Granite framework, Java Content Repository API, Java 8+, localization
- Familiarity with build tools such as Jenkins and Maven; knowledge of version control tools, especially Git
- Knowledge of patterns and good practices to design and develop quality, clean code
- Knowledge of HTML, CSS, JavaScript and jQuery
- Familiarity with task management, bug tracking, and collaboration tools like JIRA and Confluence
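As an illustration of the BigQuery work this role centres on, here is a minimal sketch with the google-cloud-bigquery client. The project, dataset, and table names are placeholders, and credentials are assumed to come from the environment.

```python
from google.cloud import bigquery

# Picks up credentials from the environment
# (e.g., GOOGLE_APPLICATION_CREDENTIALS).
client = bigquery.Client()

sql = """
    SELECT status, COUNT(*) AS n
    FROM `my-project.sales.orders`
    GROUP BY status
    ORDER BY n DESC
"""

# query() submits the job; result() blocks until it completes.
for row in client.query(sql).result():
    print(row["status"], row["n"])
```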
Posted 3 weeks ago
3.0 - 8.0 years
5 - 8 Lacs
Mumbai
Work from Office
Role Overview: Seeking an experienced Apache Airflow specialist to design and manage data orchestration pipelines for batch/streaming workflows in a Cloudera environment.
Key Responsibilities:
- Design, schedule, and monitor DAGs for ETL/ELT pipelines
- Integrate Airflow with Cloudera services and external APIs
- Implement retries, alerts, logging, and failure recovery
- Collaborate with data engineers and DevOps teams
Required education: Bachelor's Degree
Preferred education: Master's Degree
Skills Required:
- Experience: 3-8 years
- Expertise in Airflow 2.x, Python, Bash
- Knowledge of CI/CD for Airflow DAGs
- Proven experience with Cloudera CDP and Spark/Hive-based data pipelines
- Integration with Kafka, REST APIs, databases
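A minimal sketch of the DAG pattern this role describes: an Airflow 2.x DAG with retries and a failure callback. The alerting hook and task bodies are hypothetical placeholders.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def notify_failure(context):
    # Placeholder alerting hook; a real one might page or post to Slack.
    print(f"Task failed: {context['task_instance'].task_id}")

def extract():
    print("pulling batch from source")

def load():
    print("loading into warehouse")

with DAG(
    dag_id="etl_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",       # `schedule` replaces schedule_interval in Airflow 2.4+
    catchup=False,
    default_args={
        "retries": 3,                          # automatic retry on failure
        "retry_delay": timedelta(minutes=5),
        "on_failure_callback": notify_failure, # alerting hook per task
    },
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```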
Posted 3 weeks ago
3.0 - 7.0 years
10 - 14 Lacs
Pune
Work from Office
The Developer leads cloud application development and deployment. The developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities, and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security using automation and configuration management tools.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Strong proficiency in Java, Spring Framework, Spring Boot, RESTful APIs; excellent understanding of OOP and design patterns
- Strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices
- Primary skills: Core Java, Spring Boot, Java2/EE, microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python
- Strong knowledge of microservice logging, monitoring, debugging and testing; in-depth knowledge of relational databases (e.g., MySQL)
- Experience with container platforms such as Docker and Kubernetes; experience with messaging platforms such as Kafka or IBM MQ
- Good understanding of Test-Driven Development; familiarity with Ant, Maven or other build automation frameworks
- Good knowledge of basic UNIX commands; experience in concurrent design and multi-threading
Preferred technical and professional experience: None
Posted 3 weeks ago
3.0 - 7.0 years
10 - 14 Lacs
Bengaluru
Work from Office
As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys.
Your primary responsibilities include:
- Comprehensive feature development and issue resolution: working on end-to-end feature development and solving challenges faced in the implementation
- Stakeholder collaboration and issue resolution: collaborating with key stakeholders, internal and external, to understand problems and issues with the product and features, and solving them per the defined SLAs
- Continuous learning and technology integration: being eager to learn new technologies and implementing them in feature development
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Primary skills: Core Java, Spring Boot, Java2/EE, microservices
- Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.)
- Spark; good to have: Python
Preferred technical and professional experience: None
Posted 3 weeks ago
2.0 - 3.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Educational qualification: Bachelor of Engineering
Service Line: Data & Analytics Unit
Responsibilities:
- Bachelor's degree or foreign equivalent required from an accredited institution; three years of progressive experience in the specialty will also be considered in lieu of every year of education
- At least 5 years of experience in PySpark and Spark with Hadoop distributed frameworks, handling large amounts of big data using Spark and Hadoop ecosystems for data pipeline creation, deployment, maintenance and debugging
- Experience in scheduling and monitoring jobs and creating tools for automation
- At least 4 years of experience with Scala and Python required
- Proficient knowledge of SQL with any RDBMS
- Strong communication skills (verbal and written) with the ability to communicate across teams, internal and external, at all levels
- Ability to work within deadlines and effectively prioritize and execute tasks
Preferred qualifications: At least 1 year of AWS development experience is preferred; experience driving automation; DevOps knowledge is an added advantage
Additional Responsibilities:
- Advanced conceptual understanding of at least one programming language
- Advanced conceptual understanding of one database and one operating system
- Understanding of software engineering with practice in at least one project
- Ability to contribute to medium-to-complex tasks independently
- Exposure to design principles and ability to understand design specifications independently
- Ability to run test cases and scenarios as per the plan
- Ability to accept and respond to production issues and coordinate with stakeholders
- Good understanding of the SDLC
- Analytical abilities, logical thinking, and awareness of the latest technologies and trends
Technical and Professional Skills:
- Primary skills: PySpark, Spark, and proficiency in SQL
- Secondary skills: Scala and Python
- Experience: 3+ years
Preferred Skills: Bigdata-Spark, Bigdata-Pyspark, Bigdata-Python
Posted 3 weeks ago
8.0 - 13.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Educational qualification: Bachelor of Engineering
Service Line: Strategic Technology Group
Responsibilities:
Power Programmer is an important initiative within Global Delivery to develop a team of Full Stack Developers who will work on complex engineering projects, platforms and marketplaces for our clients using emerging technologies. They will be ahead of the technology curve and will be constantly enabled and trained to be polyglots. They are go-getters with a drive to solve end-customer challenges, and will spend most of their time designing and coding, with end-to-end contribution to technology-oriented development projects, providing solutions with minimum system requirements and in Agile mode.
- Collaborate with Power Programmers, the open source community and tech user groups
- Custom development of new platforms and solutions
- Work on large-scale digital platforms and marketplaces
- Work on complex engineering projects using cloud-native architecture
- Work with innovative Fortune 500 companies on cutting-edge technologies
- Co-create and develop new products and platforms for our clients
- Contribute to open source and continuously upskill in the latest technology areas
- Incubate tech user groups
Technical and Professional Skills: Big Data: Spark, Scala, Hive, Kafka
Preferred Skills: Technology-Big Data-Hbase, Technology-Big Data-Sqoop, Technology-Java-Apache-Scala, Technology-Functional Programming-Scala, Technology-Big Data - Data Processing-Map Reduce, Technology-Big Data - Data Processing-Spark
Posted 3 weeks ago
4.0 - 8.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Educational qualification: Bachelor of Engineering
Service Line: Strategic Technology Group
Responsibilities:
Power Programmer is an important initiative within Global Delivery to develop a team of Full Stack Developers who will work on complex engineering projects, platforms and marketplaces for our clients using emerging technologies. They will be ahead of the technology curve and will be constantly enabled and trained to be polyglots. They are go-getters with a drive to solve end-customer challenges, and will spend most of their time designing and coding, with end-to-end contribution to technology-oriented development projects, providing solutions with minimum system requirements and in Agile mode.
- Collaborate with Power Programmers, the open source community and tech user groups
- Custom development of new platforms and solutions
- Work on large-scale digital platforms and marketplaces
- Work on complex engineering projects using cloud-native architecture
- Work with innovative Fortune 500 companies on cutting-edge technologies
- Co-create and develop new products and platforms for our clients
- Contribute to open source and continuously upskill in the latest technology areas
- Incubate tech user groups
Technical and Professional Skills: Python + Lambda + AWS + PySpark
Preferred Skills: Technology-Big Data - Data Processing-Spark, Technology-Machine Learning-Python, Technology-Infrastructure Security-Reverse Malware Engineering-PANDA
Posted 3 weeks ago
5.0 - 8.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Educational qualification: Bachelor of Engineering
Service Line: Strategic Technology Group
Responsibilities:
Power Programmer is an important initiative within Global Delivery to develop a team of Full Stack Developers who will work on complex engineering projects, platforms and marketplaces for our clients using emerging technologies. They will be ahead of the technology curve and will be constantly enabled and trained to be polyglots. They are go-getters with a drive to solve end-customer challenges, and will spend most of their time designing and coding, with end-to-end contribution to technology-oriented development projects, providing solutions with minimum system requirements and in Agile mode.
- Collaborate with Power Programmers, the open source community and tech user groups
- Custom development of new platforms and solutions
- Work on large-scale digital platforms and marketplaces
- Work on complex engineering projects using cloud-native architecture
- Work with innovative Fortune 500 companies on cutting-edge technologies
- Co-create and develop new products and platforms for our clients
- Contribute to open source and continuously upskill in the latest technology areas
- Incubate tech user groups
Technical and Professional Skills: Big Data: Spark, Scala, Hive, Kafka
Preferred Skills: Technology-Big Data-Big Data - ALL, Technology-Big Data - Hadoop-Hadoop, Technology-Big Data-Hbase, Technology-Big Data-Sqoop, Technology-Java-Apache-Scala, Technology-Functional Programming-Scala, Technology-IOT Platform-Custom IOT Platform - Big Data Processing Analytics
Posted 3 weeks ago
3.0 - 5.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Educational qualification: Bachelor of Engineering
Service Line: Data & Analytics Unit
Responsibilities:
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to organizational guidelines and processes. You will be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Technical and Professional Skills: Technology-Functional Programming-Scala, Technology-Java-Apache-Scala
Preferred Skills: Technology-Java-Apache-Scala, Technology-Functional Programming-Scala
Posted 3 weeks ago
0.0 - 1.0 years
8 - 10 Lacs
Hyderabad
Work from Office
Google Cloud Platform: GCS, DataProc, BigQuery, Data Flow
Programming languages: Java; scripting languages like Python, Shell Script, SQL
5+ years of experience in IT application delivery with proven experience in agile development methodologies
1 to 2 years of experience in Google Cloud Platform (GCS, DataProc, BigQuery, Composer, data processing like Data Flow)
Posted 3 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Mumbai, Hyderabad, Chennai
Work from Office
Your Role
- Should have extensively worked on Metadata, Rules & Member Lists in HFM; VB scripting knowledge is mandatory
- Understand and communicate the consequences of changes made
- Should have worked on monthly/quarterly/yearly validations
- Should have worked on ICP accounts, journals and intercompany reports
- Should have worked on data forms & data grids
- Should be able to work on FDMEE mappings and be fluent with FDMEE knowledge
- Should have worked on Financial Reporting Studio
Your profile
- Performing UAT with business on the CRs
- Should be able to resolve business queries about HFM (if any)
- Agile process knowledge will be an added advantage
What you'll love about working here
You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage or new parent support via flexible work. You will have access to one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications. We're committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging at Capgemini; you are valued for who you are. Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band. Also get to participate in internal sports events, yoga challenges, or marathons. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders, or create solutions to overcome societal and environmental challenges.
About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
Location: Hyderabad, Chennai, Mumbai, Bengaluru
Posted 3 weeks ago
3.0 - 5.0 years
5 - 7 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
We are seeking a skilled Big Data Developer with 3+ years of experience to develop, maintain, and optimize large-scale data pipelines using frameworks like Spark, PySpark, and Airflow. The role involves working with SQL, Impala, Hive, and PL/SQL for advanced data transformations and analytics, designing scalable data storage systems, and integrating structured and unstructured data using tools like Sqoop. The ideal candidate will collaborate with cross-functional teams to implement data warehousing strategies and leverage BI tools for insights. Proficiency in Python programming, workflow orchestration with Airflow, and Unix/Linux environments is essential.
Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Posted 3 weeks ago
1.0 - 2.0 years
3 - 6 Lacs
Dhule
Work from Office
Google Cloud Platform: GCS, DataProc, BigQuery, Data Flow
Programming languages: Java; scripting languages like Python, Shell Script, SQL
5+ years of experience in IT application delivery with proven experience in agile development methodologies
1 to 2 years of experience in Google Cloud Platform (GCS, DataProc, BigQuery, Composer, data processing like Data Flow)
Posted 3 weeks ago
7.0 - 12.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Location: Bangalore/Hyderabad/Pune
Experience level: 7+ years
About the Role
We are seeking a highly skilled Snowflake Developer to join our team in Bangalore. The ideal candidate will have extensive experience in designing, implementing, and managing Snowflake-based data solutions. This role involves developing data architectures and ensuring the effective use of Snowflake to drive business insights and innovation.
Key Responsibilities:
- Design and implement scalable, efficient, and secure Snowflake solutions to meet business requirements
- Develop data architecture frameworks, standards, and principles, including modeling, metadata, security, and reference data
- Implement Snowflake-based data warehouses, data lakes, and data integration solutions
- Manage data ingestion, transformation, and loading processes to ensure data quality and performance
- Collaborate with business stakeholders and IT teams to develop data strategies and ensure alignment with business goals
- Drive continuous improvement by leveraging the latest Snowflake features and industry trends
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field
- 8+ years of experience in data architecture, data engineering, or a related field
- Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions
- Must have experience working with Airflow
- Proven track record of contributing to data projects and working in complex environments
- Familiarity with cloud platforms (e.g., AWS, GCP) and their data services
- Snowflake certification (e.g., SnowPro Core, SnowPro Advanced) is a plus
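Illustrative of the ingestion and loading responsibilities above, here is a minimal sketch using snowflake-connector-python; the connection values, stage, and table names are placeholders.

```python
import snowflake.connector

# Placeholder credentials; real values would come from a secrets manager.
conn = snowflake.connector.connect(
    account="myaccount",
    user="loader",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Bulk-load staged files into a target table; stage and table
    # names are illustrative.
    cur.execute("""
        COPY INTO STAGING.ORDERS
        FROM @ORDERS_STAGE
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
    print(cur.fetchall())   # per-file load results
finally:
    conn.close()
```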
Posted 3 weeks ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru
Work from Office
Role: Snowflake Developer with DBT
Location: Bangalore/Hyderabad/Pune
About the Role:
We are seeking a Snowflake Developer with a deep understanding of DBT (data build tool) to help us design, build, and maintain scalable data pipelines. The ideal candidate will have hands-on experience working with Snowflake and DBT, and a passion for optimizing data processes for performance and efficiency.
Responsibilities:
- Design, develop, and optimize Snowflake data models and DBT transformations
- Build and maintain CI/CD pipelines for automated DBT workflows
- Implement best practices for data pipeline performance, scalability, and efficiency in Snowflake
- Contribute to the DBT community or develop internal tools/plugins to enhance the workflow
- Troubleshoot and resolve complex data pipeline issues using DBT and Snowflake
Qualifications:
- Minimum 4+ years of experience with Snowflake
- At least 1 year of experience with DBT
- Extensive experience with DBT, including setting up CI/CD pipelines, optimizing performance, and contributing to the DBT community or plugins
- Strong in SQL, data modelling, and ELT pipelines
- Excellent problem-solving skills and the ability to collaborate effectively in a team environment
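For the CI/CD-for-DBT requirement, here is a minimal sketch of invoking dbt programmatically from a pipeline step via dbtRunner, the programmatic entry point added in dbt-core 1.5; the model selector is illustrative.

```python
from dbt.cli.main import dbtRunner

runner = dbtRunner()

# Equivalent to `dbt build --select orders+` on the CLI: build the
# hypothetical `orders` model and everything downstream of it.
result = runner.invoke(["build", "--select", "orders+"])

if not result.success:
    raise SystemExit("dbt build failed; blocking the deploy")
```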
Posted 3 weeks ago
1.0 - 2.0 years
3 - 5 Lacs
Ahmedabad
Work from Office
Google Cloud Platform: GCS, DataProc, BigQuery, Data Flow
Programming languages: Java; scripting languages like Python, Shell Script, SQL
5+ years of experience in IT application delivery with proven experience in agile development methodologies
1 to 2 years of experience in Google Cloud Platform (GCS, DataProc, BigQuery, Composer, data processing like Data Flow)
Posted 3 weeks ago
4.0 - 9.0 years
10 - 12 Lacs
Bengaluru, Doddakannell, Karnataka
Work from Office
We are seeking a highly skilled Data Engineer with expertise in ETL techniques, programming, and big data technologies. The candidate will play a critical role in designing, developing, and maintaining robust data pipelines, ensuring data accuracy, consistency, and accessibility. This role involves collaboration with cross-functional teams to enrich and maintain a central data repository for advanced analytics and machine learning. The ideal candidate should have experience with cloud-based data platforms, data modeling, and data governance processes.
Location: Bengaluru, Doddakannell (Sarjapur Road), Karnataka
Posted 3 weeks ago