5.0 - 9.0 years
9 - 18 Lacs
Bengaluru
Hybrid
Job Description
- 5+ years of IT experience
- Good understanding of analytics tools for effective data analysis
- Ability to lead teams
- Prior experience on production deployment and production support teams
- Experience with big data tools: Hadoop, Spark, Apache Beam, Kafka, etc.
- Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
- Experience with data warehouse tools such as BigQuery, Redshift, Synapse, or Snowflake
- Experience in ETL and data warehousing
- Firm understanding of relational and non-relational databases such as MySQL, MS SQL Server, Postgres, MongoDB, and Cassandra
- Experience with cloud platforms such as AWS, GCP, and Azure
- Experience with workflow management tools such as Apache Airflow

Roles & Responsibilities
- Develop high-performance, scalable solutions on GCP that extract, transform, and load big data
- Design and build production-grade data solutions, from ingestion to consumption, using Java/Python
- Design and optimize data models on GCP using data stores such as BigQuery
- Handle the deployment process
- Optimize data pipelines for performance and cost in large-scale data lakes
- Write complex, highly optimized queries across large data sets and create data processing layers
- Interact closely with data engineers to identify the right tools for delivering product features by performing POCs
- Collaborate with business stakeholders, BAs, and other Data/ML engineers
- Research new use cases for existing data

Preferred
- Awareness of design best practices for OLTP and OLAP systems
- Participation in DB and pipeline design
- Exposure to load-testing methodologies, pipeline debugging, and delta load handling
- Experience on heterogeneous migration projects
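The "delta load handling" listed under Preferred can be illustrated with a minimal sketch. This is plain Python rather than Spark or BigQuery, and the record layout and watermark column are made up for the example; the idea is simply to load only rows changed since the last successful run:

```python
from datetime import datetime, timezone

def delta_load(source_rows, target, last_watermark):
    """Incrementally load only rows newer than the last watermark.

    source_rows: iterable of dicts with an 'updated_at' datetime field
    target: dict keyed by primary key 'id' (stands in for a warehouse table)
    Returns the new watermark so the next run resumes where this one stopped.
    """
    new_watermark = last_watermark
    for row in source_rows:
        if row["updated_at"] > last_watermark:   # delta filter
            target[row["id"]] = row              # upsert by primary key
            if row["updated_at"] > new_watermark:
                new_watermark = row["updated_at"]
    return new_watermark

# Example run: only the row updated after the watermark is loaded.
t0 = datetime(2024, 1, 1, tzinfo=timezone.utc)
source = [
    {"id": 1, "updated_at": datetime(2023, 12, 31, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2024, 1, 2, tzinfo=timezone.utc)},
]
target = {}
wm = delta_load(source, target, t0)
print(sorted(target), wm.isoformat())
```

In a real pipeline the watermark would be persisted (e.g., in a control table) between Airflow runs rather than held in memory.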
Posted 3 weeks ago
3.0 - 8.0 years
9 - 13 Lacs
Hyderabad
Work from Office
About the job:
- As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform.
- You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure.
- This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What You'll Do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate current solutions to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Apply best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll Be Expected To Have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3 to 6 years of overall experience, with 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 3 weeks ago
4.0 - 6.0 years
13 - 18 Lacs
Bengaluru
Remote
About BNI: Established in 1985, BNI is the world's largest business referral network. With over 325,000 small- to medium-sized business Members in over 11,000 Chapters across 77 countries, we are a global company with local footprints. Our proven approach provides Members with a structured, positive, and professional referral program that enables them to sharpen their business skills, develop meaningful, long-term relationships, and experience business growth. Visit BNI.com to learn how BNI has impacted the lives of our Members and how it can help you achieve your business goals.

Position Summary
The Database Developer will be part of BNI's Global Information Technology Team and will primarily be responsible for the creation, development, maintenance, and enhancement of our databases, queries, routines, and processes. The Database Developer will work closely with the Database Administrator, data team, software developers, QA engineers, and DevOps engineers located in the BNI office in Bangalore, as well as all levels of BNI management and leadership teams. This is an unparalleled opportunity to become part of a growing team and a growing global organization; high performers will have significant growth opportunities available to them. The candidate should be an expert in both database and query design, able to write queries on demand, and should possess good hands-on data engineering experience with the tools listed in the technical table below. The candidate should own assignments and work independently on query development and other aspects of data engineering.
Roles and Responsibilities
- Design stable, reliable, and effective databases
- Create, optimize, and maintain queries used in our software applications, as well as data extracts and ETL processes
- Modify and maintain databases, routines, and queries to ensure accuracy, maintainability, scalability, and high performance of all our data systems
- Solve database usage issues and malfunctions
- Liaise with developers to improve applications and establish best practices
- Provide data management support for our users/clients
- Research, analyze, and recommend upgrades to our data systems
- Prepare documentation and specifications for all deployed queries/routines/processes
- Profile, optimize, and tweak queries and routines for optimal performance
- Support the Development and Quality Assurance teams with their database development and access needs
- Be a team player and strong problem-solver, able to work with a diverse team

Qualifications (Required)
- Bachelor's degree or equivalent work experience
- Fluent in English, with excellent oral and written communication skills
- 5+ years of experience with Linux-based MySQL/MariaDB database development and maintenance
- 2+ years of experience with database design/development/scripting
- Proficient in writing and optimizing SQL statements
- Strong proficiency in MySQL/MariaDB scripting, including functions, routines, and complex data queries
- Understanding of MySQL/MariaDB's underlying storage engines, such as InnoDB and MyISAM
- Knowledge of standards and best practices in MySQL/MariaDB
- Knowledge of MySQL/MariaDB features, such as its event scheduler (desired)
- Familiarity with other SQL/NoSQL databases such as PostgreSQL, MongoDB, and Redis
- Experience with Amazon Web Services' RDS offering
- Experience with data lakes and big data (required)
- Experience in Python (required)
- Experience with tools like Airflow/dbt and data pipelines
- Experience with Apache Superset
- Knowledge of AWS services from a data engineering point of view (desired)
- Proficient understanding of git/GitHub as a source control system
- Familiarity with working in an Agile/iterative development framework
- Self-starter with a positive attitude and the ability to collaborate with product managers and developers
- Strong SQL experience and the ability to write queries on demand

Primary Technologies: database stored procedures, SQL optimization, database management, Airflow/dbt, data warehousing with Redshift/Snowflake (mandatory), Python, Linux, data pipelines

Physical Demands and Working Conditions
Sedentary work: exerting up to 10 pounds of force occasionally and/or a negligible amount of force frequently or constantly to lift, carry, push, pull, or otherwise move objects. Repetitive motion: substantial movements of the wrists, hands, and/or fingers. The worker is required to have close visual acuity to perform activities such as preparing and analyzing data and figures, transcribing, viewing a computer terminal, and extensive reading.

External Posting Language
This is a full-time position. This job description is not designed to cover or contain a comprehensive listing of the activities, duties, or responsibilities required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without notice. Learn more at BNI.com
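The "profile, optimize, and tweak queries" responsibility above can be sketched concretely. The example below uses Python's stdlib sqlite3 so it is self-contained (the table and index names are made up); the workflow shown, covering a frequent predicate with an index and checking the query plan before and after, carries over directly to MySQL/MariaDB with `EXPLAIN`:

```python
import sqlite3

# In-memory database standing in for MySQL/MariaDB.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE referrals (id INTEGER PRIMARY KEY, member_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO referrals (member_id, amount) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT SUM(amount) FROM referrals WHERE member_id = ?"

# Without an index, the predicate forces a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

# Add an index covering the predicate column, then re-check the plan.
conn.execute("CREATE INDEX idx_referrals_member ON referrals (member_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

print(plan_before[-1][-1])  # full scan of the table
print(plan_after[-1][-1])   # index search on member_id
```

The same before/after comparison is how a slow production query is usually profiled: confirm the plan changed from a scan to an index search, then measure.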
Posted 3 weeks ago
1.0 - 3.0 years
3 - 5 Lacs
Bengaluru
Work from Office
As a Backend Engineer (Founding Engineer) at Zintlr, you will have the unique opportunity to build critical components of our cutting-edge AI product from the ground up. You'll play a key role in shaping the infrastructure to handle millions of data requests per hour, working closely with our Lead Architect and Product Owner to bring innovative features to life. This is not just another backend role: it's a chance to be at the forefront of creating a scalable, impactful product in the fast-growing "people intelligence" space. Your contributions will directly influence the product's success and the experience of thousands of users globally.

Requirements
- Strong expertise in Django and Python.
- Solid knowledge of SQL and NoSQL databases (e.g., MySQL, MongoDB).
- Practical experience deploying solutions on GCP or AWS.
- Strong computer science fundamentals and problem-solving skills.
- Hands-on experience in building backend applications or products.
- Passion for programming, with a proactive and organized approach to work.

Responsibilities
- Build critical components of a scalable, high-performance AI system.
- Develop and maintain infrastructure to handle millions of data requests per hour.
- Write clean, efficient, and reliable code with minimal oversight.
- Collaborate with the Lead Architect and Product Owner to align engineering capabilities with product evolution.
Posted 3 weeks ago
6.0 - 11.0 years
25 - 30 Lacs
Mumbai, Mumbai Suburban, Mumbai (All Areas)
Work from Office
Experience using SQL, PL/SQL, or T-SQL with RDBMSs such as Teradata, MS SQL Server, or Oracle in production environments. Experience with Python, ADF, Azure, and Databricks. Experience working with Microsoft Azure, AWS, or other leading cloud platforms.

Required Candidate Profile
Hands-on experience with Hadoop, Spark, Hive, or similar frameworks. Data integration and ETL, data modelling, database management, data warehousing, big data frameworks, CI/CD.

Perks and Benefits
To be disclosed post-interview
Posted 3 weeks ago
10.0 - 14.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer (ETL, Big Data, Hadoop, Spark, GCP) at the Assistant Vice President level, located in Pune, India, you will be responsible for developing and delivering engineering solutions that achieve business objectives. You are expected to have a strong understanding of the engineering principles crucial to the bank and to be skilled in root cause analysis, addressing enhancements and fixes that improve product reliability and resiliency. Working independently on medium to large projects with strict deadlines, you will collaborate in a cross-application technical environment, demonstrating a solid hands-on development track record within an agile methodology. This role involves collaborating with a globally dispersed team and is integral to the development of the Compliance tech internal team in India, delivering enhancements in compliance tech capabilities to meet regulatory commitments.

Your key responsibilities will include analyzing data sets; designing and coding stable, scalable data ingestion workflows; integrating them with existing workflows; and developing analytics algorithms on ingested data. You will also work on data sourcing in Hadoop and GCP, owning unit testing, UAT deployment, end-user sign-off, and production go-live. Root cause analysis skills will be essential for identifying bugs and issues and for supporting the production support and release management teams. You will operate in an agile scrum team and ensure that new code is thoroughly tested at both the unit and system levels.

To excel in this role, you should have over 10 years of coding experience with reputable organizations, hands-on experience with Bitbucket and CI/CD pipelines, and proficiency in Hadoop, Python, Spark, SQL, Unix, and Hive. A basic understanding of on-prem and GCP data security, as well as hands-on development experience with large ETL/big data systems (GCP experience being a plus), is required.
Familiarity with cloud services such as Cloud Build, Artifact Registry, Cloud DNS, and Cloud Load Balancing, along with Dataflow, Cloud Composer, Cloud Storage, and Dataproc, is essential. Additionally, knowledge of data quality dimensions and data visualization is beneficial. You will receive comprehensive support, including training and development opportunities, coaching from experts on your team, and a culture of continuous learning to facilitate your career progression. The company fosters a collaborative and inclusive work environment, empowering employees to excel together every day. As part of Deutsche Bank Group, we encourage applications from all individuals and promote a positive and fair workplace culture. For further details about our company and teams, please visit our website: https://www.db.com/company/company.htm
Posted 3 weeks ago
7.0 - 12.0 years
15 - 22 Lacs
Pune, Chennai, Bengaluru
Work from Office
Primary: Azure, Databricks, ADF, PySpark/Python
Secondary: Data warehouse, SAS/Alteryx

Must Have
• 8+ years of IT experience in data warehousing and ETL
• Hands-on data experience with cloud technologies on Azure: ADF, Synapse, PySpark/Python
• Ability to understand design and source-to-target mapping (STTM) and create specification documents
• Flexibility to operate from client office locations
• Able to mentor and guide junior resources, as needed

Nice to Have
• Any relevant certifications
• Banking experience in Risk & Regulatory, Commercial, or Credit Cards/Retail
Posted 3 weeks ago
6.0 - 8.0 years
6 - 10 Lacs
Hyderabad, Chennai
Work from Office
Job Title: Lead
Work Location: Chennai, TN / Hyderabad, TS
Skills Required: Big Data and Hadoop ecosystems; Unix/Linux basics and commands
Experience Range in Required Skills: 6-8 years
Posted 3 weeks ago
0.0 - 4.0 years
18 - 19 Lacs
Gurugram
Work from Office
Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role?
- Deliver best-in-class data products to develop and implement digital marketing programs in this ever-changing digital targeting landscape
- Create world-class data products powered by data and dynamic intelligence
- Drive audience identification and custom targeting leveraging Amex and partner data assets
- Create data segmentation and optimization strategies for targeting profitable prospects with the right product/incentive across multiple digital channels
- Develop and foster cross-functional relationships with partners across American Express

Minimum Qualifications
- Master's degree or equivalent experience in a quantitative field (e.g., Finance, Engineering, Physics, Mathematics, Computer Science, or Economics)
- Strong programming skills
- Strong analytical/conceptual thinking acumen to solve unstructured and complex business problems
- Excellent written and oral communication skills

Preferred Qualifications
- Digital analytics experience and familiarity with paid marketing channels and digital marketing technology
- Experience with big data programming languages (Hive, Pig), Python, and Java

Benefits:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

An offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 3 weeks ago
5.0 - 10.0 years
25 - 35 Lacs
Hyderabad
Hybrid
Job Title: Big Data Engineer
Experience: 5-9 Years
Location: Hyderabad (Hybrid)
Employment Type: Full-Time

Job Summary: We are seeking a skilled Big Data Engineer with 5-9 years of experience in building and managing scalable data pipelines and analytics solutions. The ideal candidate will have strong expertise in big data, Hadoop, Apache Spark, SQL, and Data Lake/Data Warehouse architectures. Experience working with any cloud platform (AWS, Azure, or GCP) is preferred.

Required Skills:
- 5-9 years of hands-on experience as a Big Data Engineer.
- Strong proficiency in Apache Spark (PySpark or Scala).
- Solid understanding of and experience with SQL and database optimization.
- Experience with data lake or data warehouse environments and architecture patterns.
- Good understanding of data modeling, performance tuning, and partitioning strategies.
- Experience working with large-scale distributed systems and batch/stream data processing.

Preferred Qualifications:
- Experience with cloud platforms such as AWS, Azure, or GCP.

Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
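The "partitioning strategies" requirement above can be illustrated with a small sketch. This is plain Python rather than Spark, but the key-to-partition rule shown (hash of the key modulo partition count) is the same idea behind Spark's default hash partitioning; the record layout is made up for the example:

```python
def hash_partition(records, key, num_partitions):
    """Assign each record to a partition by hashing its key.

    Records with the same key always land in the same partition, which is
    what lets a distributed join or group-by avoid re-shuffling them.
    """
    partitions = [[] for _ in range(num_partitions)]
    for rec in records:
        p = hash(rec[key]) % num_partitions
        partitions[p].append(rec)
    return partitions

orders = [{"customer": c, "amount": a}
          for c, a in [("alice", 10), ("bob", 20), ("alice", 30), ("carol", 5)]]
parts = hash_partition(orders, "customer", 4)

# Co-location check: every 'alice' order sits in a single partition.
alice_parts = {i for i, p in enumerate(parts) for r in p if r["customer"] == "alice"}
print(len(alice_parts))
```

Choosing the partition key (and partition count) to balance skew against shuffle cost is the tuning decision the posting alludes to.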
Posted 3 weeks ago
7.0 - 9.0 years
11 - 15 Lacs
Hyderabad
Work from Office
Responsibilities:
- Design and implement Cloudera-based data platforms, including cluster sizing, configuration, and optimization.
- Install, configure, and administer Cloudera Manager and CDP clusters, managing all aspects of the cluster lifecycle.
- Monitor and troubleshoot platform performance, identifying and resolving issues in a timely manner.
- Review and maintain the data ingestion and processing pipelines on the Cloudera platform.
- Collaborate with data engineers and data scientists to design and optimize data models, ensuring efficient data storage and retrieval.
- Implement and enforce security measures for the Cloudera platform, including authentication, authorization, and encryption.
- Manage platform user access and permissions, ensuring compliance with data privacy regulations and internal policies.
- Create technology roadmaps for the Cloudera platform.
- Stay up to date with the latest Cloudera and big data technologies, and recommend and implement relevant updates and enhancements to the platform.
- Plan, test, and execute upgrades involving Cloudera components, ensuring platform stability and security.
- Document platform configurations, processes, and procedures, and provide training and support to other team members as needed.

Requirements:
- Proven experience as a Cloudera platform engineer or in a similar role, with a strong understanding of Cloudera Manager and CDH clusters.
- Expertise in designing, implementing, and maintaining scalable, high-performance data platforms using Cloudera technologies such as Hadoop, Spark, Hive, and Kafka.
- Strong knowledge of big data concepts and technologies, data modeling, and data warehousing principles.
- Familiarity with data security and compliance requirements, and experience implementing security measures for Cloudera platforms.
- Proficiency in Linux system administration and scripting languages (e.g., Shell, Python).
- Strong troubleshooting and problem-solving skills, with the ability to diagnose and resolve platform issues quickly.
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Experience with Azure Data Factory, Azure Databricks, or Azure Synapse is a plus.
Posted 3 weeks ago
10.0 - 14.0 years
35 - 40 Lacs
Hyderabad
Work from Office
Skills: Cloudera, Big Data, Hadoop, Spark, Kafka, Hive, CDH Clusters

Responsibilities:
- Design and implement Cloudera-based data platforms, including cluster sizing, configuration, and optimization.
- Install, configure, and administer Cloudera Manager and CDP clusters, managing all aspects of the cluster lifecycle.
- Monitor and troubleshoot platform performance, identifying and resolving issues promptly.
- Review and maintain the data ingestion and processing pipelines on the Cloudera platform.
- Collaborate with data engineers and data scientists to design and optimize data models, ensuring efficient data storage and retrieval.
- Implement and enforce security measures for the Cloudera platform, including authentication, authorization, and encryption.
- Manage platform user access and permissions, ensuring compliance with data privacy regulations and internal policies.
- Create technology roadmaps for the Cloudera platform.
- Stay up to date with the latest Cloudera and big data technologies, and recommend and implement relevant updates and enhancements to the platform.
- Plan, test, and execute upgrades involving Cloudera components, ensuring platform stability and security.
- Document platform configurations, processes, and procedures, and provide training and support to other team members as needed.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a Cloudera platform engineer or in a similar role, with a strong understanding of Cloudera Manager and CDH clusters.
- Expertise in designing, implementing, and maintaining scalable, high-performance data platforms using Cloudera technologies such as Hadoop, Spark, Hive, and Kafka.
- Strong knowledge of big data concepts and technologies, data modeling, and data warehousing principles.
- Familiarity with data security and compliance requirements, and experience implementing security measures for Cloudera platforms.
- Proficiency in Linux system administration and scripting languages (e.g., Shell, Python).
- Strong troubleshooting and problem-solving skills, with the ability to diagnose and resolve platform issues quickly.
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Experience with Azure Data Factory, Azure Databricks, or Azure Synapse is a plus.
Posted 3 weeks ago
15.0 - 20.0 years
4 - 8 Lacs
Hyderabad
Work from Office
About The Role
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems, contributing to the overall efficiency and reliability of data management within the organization.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to understand data requirements and deliver effective solutions.
- Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
- Must-have skills: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and processes.
- Familiarity with cloud platforms and data storage solutions.
- Knowledge of programming languages such as Python or Scala.

Additional Information:
- The candidate should have a minimum of 2 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- A 15 years full-time education is required.

Qualification: 15 years full-time education
Posted 3 weeks ago
7.0 - 11.0 years
10 - 15 Lacs
Bengaluru
Work from Office
About The Role
Skill required: Tech for Operations - Artificial Intelligence (AI)
Designation: AI/ML Computational Science Specialist
Qualifications: Any Graduation / Post Graduate Diploma in Management
Years of Experience: 7 to 11 years
Language Ability: English (Domestic) - Advanced

About Accenture
Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com

What would you do?
You will be part of the Technology for Operations team that acts as a trusted advisor and partner to Accenture Operations. The team provides innovative and secure technologies to help clients build an intelligent operating model, driving exceptional results, and works closely with the sales, offering, and delivery teams to identify and build innovative solutions. Major sub-deals include AHO (Application Hosting Operations), ISMT (Infrastructure Management), and Intelligent Automation. The role requires an understanding of the foundational principles of Artificial Intelligence, including concepts, techniques, and tools, in order to use AI effectively.
What are we looking for?
- Machine learning and machine learning algorithms
- Microsoft Azure Machine Learning
- Python (programming language) and Python software development
- Ability to work well in a team
- Written and verbal communication
- Numerical ability
- Results orientation

1: AI Research Scientists
- Deep expertise in machine learning and AI theory
- Algorithm design and theoretical innovation
- Data proficiency and synthetic data generation
- Responsible AI and ethical awareness

2: ML Research Engineers
- Mathematical and statistical foundations
- Programming and software engineering
- Model development and experimentation
- MLOps and deployment

Roles and Responsibilities:
In this role you are required to analyze and solve moderately complex problems. You may create new solutions, leveraging and, where needed, adapting existing methods and procedures. You will need to understand the strategic direction set by senior management as it relates to team goals. Your primary upward interaction is with your direct supervisor; you may also interact with peers and/or management levels at a client and/or within Accenture. Guidance will be provided when determining methods and procedures on new assignments. Decisions you make will often impact the team in which you reside. You may manage small teams and/or work efforts (if in an individual contributor role) at a client or within Accenture. Please note that this role may require you to work in rotational shifts.

Qualification: Any Graduation, Post Graduate Diploma in Management
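The mathematical and statistical foundations listed above can be made concrete with a minimal sketch: batch gradient descent fitting a one-variable linear model in plain Python. The data and learning rate are made up for the illustration; real work would use a framework such as Azure Machine Learning or scikit-learn rather than hand-rolled loops:

```python
# Fit y = w*x + b to noiseless data generated from y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

# Converges to w close to 2 and b close to 1.
print(round(w, 2), round(b, 2))
```

The same update rule, scaled to many parameters and mini-batches, is the core of how neural networks are trained.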
Posted 3 weeks ago
15.0 - 20.0 years
10 - 14 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: PySpark
Good-to-have skills: Python (Programming Language)
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process while ensuring alignment with organizational goals. You will also engage in strategic planning and decision-making to enhance application performance and user experience, fostering a culture of innovation and continuous improvement within your team.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior professionals to enhance their skills and knowledge.
- Facilitate workshops and meetings to drive project objectives and gather feedback.

Professional & Technical Skills:
- Must-have skills: proficiency in PySpark.
- Good-to-have skills: experience with Python (Programming Language).
- Strong understanding of data processing frameworks and distributed computing.
- Experience with cloud platforms and services related to application development.
- Familiarity with Agile methodologies and project management tools.
- Ability to troubleshoot and optimize application performance.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
Posted 3 weeks ago
15.0 - 20.0 years
5 - 9 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Data Engineering
Good-to-have skills: Python (Programming Language), Amazon Web Services (AWS)
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions that align with business objectives, and ensuring that applications are optimized for performance and usability. You will also engage in problem-solving activities, providing support and enhancements to existing applications while keeping abreast of the latest technologies and methodologies in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve application performance and user experience.

Professional & Technical Skills:
- Must-have skills: proficiency in Data Engineering.
- Good-to-have skills: experience with Python (Programming Language), Amazon Web Services (AWS).
- Strong understanding of data modeling and database design principles.
- Experience with ETL processes and data pipeline development.
- Familiarity with big data technologies and frameworks.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Data Engineering.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
Posted 3 weeks ago
7.0 - 12.0 years
20 - 35 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Greetings from Grid Dynamics! We are currently looking for a Lead Full Stack Developer. Please find the JD below for your reference.
Job Description (details on the tech stack):
- Design and implement intuitive and responsive user interfaces using React.js or similar front-end technologies.
- Collaborate with stakeholders to create a seamless user experience.
- Create mockups and UI prototypes for quick turnaround using Figma, Canva, or similar tools.
- Strong proficiency in HTML, CSS, JavaScript, and React.js.
- Experience with styling and graph libraries such as Highcharts, Material UI, and Tailwind CSS.
- Solid understanding of React fundamentals, including Routing, Virtual DOM, and Higher-Order Components (HOC).
- Knowledge of REST API integration.
- Understanding of Node.js is a big advantage.
- Experience with REST API development, preferably using FastAPI.
- Proficiency in programming languages like Python and Java.
- Integrate APIs and services between front-end and back-end systems.
- Experience with Docker and containerized applications.
- Experience with orchestration tools such as Apache Airflow or similar.
- Design, develop, and manage simple data pipelines using Databricks, PySpark, and Google BigQuery.
- Medium-level expertise in SQL.
- Basic understanding of authentication methods such as JWT and OAuth.
- Experience working with cloud platforms such as AWS, GCP, or Azure.
- Familiarity with Google BigQuery and Google APIs.
- Hands-on experience with Kubernetes for container orchestration.
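The posting above lists a basic understanding of JWT authentication. The core idea is small enough to sketch with the standard library: a JWT is `header.payload.signature`, where the signature is an HMAC over the first two parts. This is an illustrative toy (the secret and claims are invented), not a replacement for a real JWT library:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # illustration only; real keys come from a secret store

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict) -> str:
    """Build a JWT-style token: header.payload.signature (HS256)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_token(token: str) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = b64url(
        hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest()
    )
    return hmac.compare_digest(sig, expected)

token = sign_token({"sub": "user-42", "role": "developer"})
print(verify_token(token))                       # True
tampered = token.rsplit(".", 1)[0] + ".forged"
print(verify_token(tampered))                    # False
```

In a FastAPI service of the kind described, the same verification step would typically sit in a dependency that guards protected routes, with `python-jose` or `PyJWT` doing the heavy lifting.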
Posted 3 weeks ago
15.0 - 20.0 years
9 - 14 Lacs
Bengaluru
Work from Office
About The Role
Project Role: AI / ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Must be able to apply GenAI models as part of the solution. Work could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing.
Must-have skills: Machine Learning
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an AI / ML Engineer, you will engage in the development of applications and systems that leverage artificial intelligence tools and cloud AI services. Your typical day will involve designing and implementing production-ready solutions, ensuring that they meet quality standards. You will work on various projects that may include deep learning, neural networks, chatbots, and image processing, contributing to innovative solutions that enhance operational efficiency and user experience.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve existing processes and systems to optimize performance.
Professional & Technical Skills:
- Must-have skills: Proficiency in Machine Learning.
- Strong understanding of various machine learning algorithms and their applications.
- Experience with cloud-based AI services and deployment strategies.
- Familiarity with programming languages such as Python or R for data analysis.
- Knowledge of data preprocessing techniques and data pipeline development.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Machine Learning.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
Posted 3 weeks ago
3.0 - 8.0 years
4 - 8 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: GCP Dataflow
Good-to-have skills: Google BigQuery
Minimum 3 year(s) of experience is required
Educational Qualification: Any Graduate
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and optimize data workflows, ensuring that the data infrastructure supports the organization's analytical needs and business objectives.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Develop and maintain robust data pipelines to support data processing and analytics.
- Monitor and troubleshoot data workflows to ensure optimal performance and reliability.
Professional & Technical Skills:
- Must-have skills: Proficiency in GCP Dataflow.
- Good-to-have skills: Experience with Google BigQuery.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with cloud computing concepts and services.
Additional Information:
- The candidate should have a minimum of 3 years of experience in GCP Dataflow.
- This position is based at our Bengaluru office.
- Any Graduate is required.
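GCP Dataflow runs Apache Beam pipelines, whose central idea is a chain of composable transforms over a stream of records. The sketch below imitates that shape with plain generators (it is not the Beam API; stage names and data are invented for illustration):

```python
from functools import reduce

def read(records):
    """Source stage: yield raw records, like a Read transform."""
    yield from records

def parse(rows):
    """ParDo-style transform: split CSV-ish lines into (key, value) pairs."""
    for line in rows:
        city, amount = line.split(",")
        yield city.strip(), int(amount)

def filter_valid(pairs):
    """Another ParDo-style stage: drop non-positive amounts."""
    for city, amount in pairs:
        if amount > 0:
            yield city, amount

def sum_by_key(pairs):
    """GroupByKey + combine, materialised at the end of the pipeline."""
    totals = {}
    for city, amount in pairs:
        totals[city] = totals.get(city, 0) + amount
    return totals

def run(source, *stages):
    """Chain the stages in order, like a miniature pipeline runner."""
    return reduce(lambda data, stage: stage(data), stages, source)

raw = ["Bengaluru, 100", "Hyderabad, 40", "Bengaluru, -5", "Hyderabad, 60"]
print(run(read(raw), parse, filter_valid, sum_by_key))
# {'Bengaluru': 100, 'Hyderabad': 100}
```

In Beam the same flow would be written as `pipeline | beam.Map(...) | beam.Filter(...) | beam.CombinePerKey(sum)`, and Dataflow's job is to execute those stages in parallel across workers.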
Posted 3 weeks ago
15.0 - 20.0 years
10 - 14 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: PySpark
Good-to-have skills: Python (Programming Language)
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process while maintaining a focus on quality and efficiency. You will also engage in strategic planning to align application development with organizational goals, ensuring that all stakeholders are informed and involved throughout the project lifecycle.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.
Professional & Technical Skills:
- Must-have skills: Proficiency in PySpark.
- Good-to-have skills: Experience with Python (Programming Language).
- Strong understanding of data processing frameworks and distributed computing.
- Experience in application design and architecture.
- Familiarity with cloud platforms and deployment strategies.
Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
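PySpark's RDD model boils down to a handful of functional primitives: `flatMap` to expand records and `reduceByKey` to aggregate them across a cluster. The classic word count can be sketched single-machine with the standard library to show those semantics (this is an analogue, not the PySpark API; the input lines are invented):

```python
from collections import defaultdict
from itertools import chain

def flat_map(func, data):
    """RDD.flatMap analogue: apply func to each item and flatten the results."""
    return chain.from_iterable(func(item) for item in data)

def reduce_by_key(pairs):
    """RDD.reduceByKey(add) analogue: sum values that share a key."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

lines = ["spark makes big data simple", "big data needs spark"]
pairs = ((word, 1) for word in flat_map(str.split, lines))
counts = reduce_by_key(pairs)
print(counts["spark"], counts["big"], counts["data"])  # 2 2 2
```

The distributed version is nearly word-for-word the same: `rdd.flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(add)`; Spark's value is running that shuffle across many machines.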
Posted 3 weeks ago
15.0 - 20.0 years
5 - 9 Lacs
Gurugram
Work from Office
About The Role
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Apache Spark
Good-to-have skills: MySQL, Python (Programming Language), Google BigQuery
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by delivering high-quality applications that align with business objectives.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve application performance and user experience.
Professional & Technical Skills:
- Must-have skills: Proficiency in Apache Spark.
- Good-to-have skills: Experience with Python (Programming Language), MySQL, Google BigQuery.
- Strong understanding of distributed computing principles and frameworks.
- Experience in developing data processing pipelines and ETL processes.
- Familiarity with cloud platforms and services related to big data processing.
Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Apache Spark.
- This position is based at our Gurugram office.
- A 15 years full-time education is required.
Posted 3 weeks ago
15.0 - 20.0 years
10 - 14 Lacs
Hyderabad
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: PySpark
Good-to-have skills: Python (Programming Language)
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning to align application development with organizational goals, ensuring that all stakeholders are informed and involved throughout the process.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate training and development opportunities for team members to enhance their skills.
- Monitor project progress and adjust plans as necessary to meet deadlines.
Professional & Technical Skills:
- Must-have skills: Proficiency in PySpark.
- Good-to-have skills: Experience with Python (Programming Language).
- Strong understanding of data processing frameworks and distributed computing.
- Experience with application design and architecture principles.
- Familiarity with cloud platforms and services for application deployment.
- Ability to troubleshoot and optimize application performance.
Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Hyderabad office.
- A 15 years full-time education is required.
Posted 3 weeks ago
15.0 - 20.0 years
4 - 8 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Python (Programming Language), Spark AR Studio
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Software Development Engineer, you will engage in a dynamic work environment where you will analyze, design, code, and test various components of application code for multiple clients. Your typical day will involve collaborating with team members to ensure the successful execution of projects, performing maintenance and enhancements, and contributing to the development of innovative solutions that meet client needs. You will be responsible for delivering high-quality code while adhering to best practices and standards in software development.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve development processes to increase efficiency.
Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have skills: Experience with Python (Programming Language), Spark AR Studio.
- Strong understanding of data analytics and data engineering principles.
- Experience with cloud-based data solutions and architectures.
- Familiarity with agile development methodologies.
- Must have: Databricks, Python, Spark, ADF.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
Posted 3 weeks ago
5.0 - 10.0 years
4 - 8 Lacs
Hyderabad
Work from Office
About The Role
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Talend ETL
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems, and be involved in the end-to-end data management process.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Lead the design and implementation of data solutions.
- Optimize and troubleshoot ETL processes.
- Conduct data analysis and provide insights for decision-making.
Professional & Technical Skills:
- Must-have skills: Proficiency in Talend ETL.
- Strong understanding of data modeling and database design.
- Experience with data integration and data warehousing concepts.
- Hands-on experience with SQL and scripting languages.
- Knowledge of cloud platforms and big data technologies.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Talend ETL.
- This position is based at our Hyderabad office.
- A 15 years full-time education is required.
Posted 3 weeks ago
15.0 - 20.0 years
4 - 8 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Snowflake Data Warehouse
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve data processes to optimize performance.
Professional & Technical Skills:
- Must-have skills: Proficiency in Snowflake Data Warehouse.
- Good-to-have skills: Experience with data modeling and database design.
- Strong understanding of ETL processes and data integration techniques.
- Familiarity with cloud data warehousing solutions.
- Experience in performance tuning and optimization of data queries.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
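The posting above asks for experience tuning queries in a warehouse such as Snowflake. The typical workload is a star-schema join: a large fact table joined to small dimension tables, then aggregated. A minimal stand-in using sqlite3 (the table and column names are invented for illustration; Snowflake would use clustering keys and its query profile rather than a manual index):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Star-schema sketch: one dimension table, one fact table.
    CREATE TABLE dim_city (city_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (city_id INTEGER, amount INTEGER);
    INSERT INTO dim_city VALUES (1, 'Bengaluru'), (2, 'Hyderabad');
    INSERT INTO fact_sales VALUES (1, 300), (1, 200), (2, 450);
    -- Indexing the join key is a common first tuning step in row stores.
    CREATE INDEX idx_fact_city ON fact_sales (city_id);
""")

query = """
    SELECT d.name, SUM(f.amount) AS total
    FROM fact_sales f
    JOIN dim_city d ON d.city_id = f.city_id
    GROUP BY d.name
    ORDER BY total DESC
"""
for name, total in conn.execute(query):
    print(name, total)
# Bengaluru 500
# Hyderabad 450
```

In a columnar warehouse the same query is tuned differently: prune partitions, select only needed columns, and let the engine parallelise the scan; the join-then-aggregate shape stays identical.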
Posted 3 weeks ago