8.0 - 10.0 years
25 - 40 Lacs
Pune
Hybrid
Key Skills: Solution Architecture, Google Cloud, Python, Java, Big Data, Apache Spark, Agile Delivery, Scrum, Kanban, CI/CD, DevOps, Cloud Solutions, Software Development, Testing, Production Support.
Roles & Responsibilities:
- Own and manage end-to-end technical delivery of products within your agile team.
- Lead technical deliveries for the agile team and the product.
- Manage design and development (change-the-bank, CTB) activities as well as production processing support (run-the-bank, RTB).
- Support the full delivery lifecycle (software development, testing, and operational support), adapting to demand.
- Create robust technical designs and development strategies for new components to meet requirements.
- Develop test plans, including unit and integration tests within automated test environments, to ensure code quality.
- Collaborate with Ops, Dev, and Test Engineers to identify and address operational issues (e.g., performance, operator intervention, alerting, design defects).
- Ensure service resilience, sustainability, and recovery time objectives are met for all software solutions.
- Actively drive mandatory exercises related to resilience, recovery, and service management.
- Ensure compliance with end-to-end controls for products and data, including effective risk and control management (non-financial risks, compliance, and conduct responsibilities).
- Adhere to standard processes and ensure compliance with relevant regulations and policies.
Experience Requirements:
- 8-10 years of experience and a proven track record of designing and developing complex products, both on cloud and on-premise, spanning solution architecture, design, build, testing, and production.
- Experience designing and implementing scalable solutions on Google Cloud.
- Proficiency in Python or another mainstream programming language such as Java.
- Good understanding of Big Data technologies such as Apache Spark and related tooling.
- Experience with Agile delivery methodologies (e.g., Scrum, Kanban).
- Participation in continuous improvement and transformation towards Agile, DevOps, CI/CD, and improved productivity.
- Excellent communication and interpersonal skills, with demonstrated teamwork and collaboration.
Education: B.Tech + M.Tech (Dual), B.Tech, or M.Tech.
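By way of illustration, here is a minimal PySpark sketch of the kind of batch transformation plus automated unit check this role describes; the trade data, column names, and assertion are hypothetical, not taken from the posting:

```python
from pyspark.sql import SparkSession, functions as F

def daily_trade_summary(df):
    """Aggregate raw trades into per-day, per-symbol totals."""
    return (
        df.groupBy("trade_date", "symbol")
          .agg(F.sum("quantity").alias("total_qty"),
               F.avg("price").alias("avg_price"))
    )

if __name__ == "__main__":
    spark = SparkSession.builder.appName("daily-trade-summary").getOrCreate()
    # Tiny in-memory fixture standing in for a real source table.
    trades = spark.createDataFrame(
        [("2024-01-02", "INFY", 100, 1500.0),
         ("2024-01-02", "INFY", 50, 1510.0)],
        ["trade_date", "symbol", "quantity", "price"],
    )
    summary = daily_trade_summary(trades)
    # Unit-test style assertion: quantities are summed per (date, symbol).
    assert summary.collect()[0]["total_qty"] == 150
    summary.show()
    spark.stop()
```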
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
As an AI Ops Expert, you will be responsible for delivering projects to defined quality standards within set timelines and budget constraints. Your role will involve managing the AI model lifecycle, versioning, and monitoring in production environments. You will build resilient MLOps pipelines and ensure adherence to governance standards. Additionally, you will design, implement, and oversee AIOps solutions to automate and optimize AI/ML workflows. Collaboration with data scientists, engineers, and stakeholders will be essential to ensure seamless integration of AI/ML models into production systems. Monitoring and maintaining the health and performance of AI/ML systems, as well as developing and maintaining CI/CD pipelines for AI/ML models, will also be part of your responsibilities, along with troubleshooting and resolving issues in AI/ML infrastructure and workflows and staying current with the latest AIOps, MLOps, and Kubernetes tools and technologies.

To be successful in this role, you must hold a Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field, along with at least 8 years of relevant experience. Your proven experience in AIOps, MLOps, or related fields will be crucial. Proficiency in Python and hands-on experience with FastAPI are required, as is strong expertise in Docker and Kubernetes (or AKS). Familiarity with MS Azure and its AI/ML services, including MLflow on Azure ML, is essential. Additionally, you should be proficient with DevContainers for development and have knowledge of CI/CD tools such as Jenkins, Argo CD, Helm, GitHub Actions, or Azure DevOps. Experience with containerization and orchestration tools, Infrastructure as Code (Terraform or equivalent), strong problem-solving skills, and excellent communication and collaboration abilities are also necessary.

Preferred skills for this role include experience with machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn, as well as familiarity with data engineering tools like Apache Kafka and Apache Spark. Knowledge of monitoring and logging tools such as Prometheus, Grafana, or the ELK stack, along with an understanding of data versioning tools like DVC or MLflow, would be advantageous. Proficiency in Azure-specific tools and services such as Azure Machine Learning (Azure ML), Azure DevOps, Azure Kubernetes Service (AKS), Azure Functions, Azure Logic Apps, Azure Data Factory, Azure Monitor, and Application Insights is also preferred.

Joining our team at Société Générale will provide you with the opportunity to be part of a dynamic environment where your contributions can make a positive impact on the future. You will have the chance to innovate, collaborate, and grow in a supportive and stimulating setting. Our commitment to diversity and inclusion, as well as our focus on ESG principles and responsible practices, ensures that you will have the opportunity to contribute meaningfully to initiatives and projects aimed at creating a better future for all. If you are looking to be directly involved, develop your expertise, and be part of a team that values collaboration and innovation, you will find a welcoming and fulfilling environment with us at Société Générale.
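For a sense of how the pieces named above fit together, here is a minimal sketch of serving an MLflow-registered model behind FastAPI; the model name, registry stage, and feature schema are hypothetical assumptions, not details from the posting:

```python
import mlflow.pyfunc
import numpy as np
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Hypothetical registry URI; in practice this would point at your
# MLflow / Azure ML model registry and chosen stage.
MODEL_URI = "models:/churn-classifier/Production"
model = mlflow.pyfunc.load_model(MODEL_URI)

class Features(BaseModel):
    tenure_months: int
    monthly_spend: float

@app.post("/predict")
def predict(features: Features):
    frame = pd.DataFrame([features.dict()])
    prediction = model.predict(frame)
    # Normalise whatever array-like the model returns into a JSON list.
    return {"prediction": np.asarray(prediction).ravel().tolist()}
```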
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
At Citi, we are dedicated to building the future of banking through our cutting-edge technology and global presence. As a part of our team, you will have access to resources that cater to your unique needs, support your well-being, and empower you to plan for your future. We offer programs and services for physical and mental wellness, financial planning support, and continuous learning and development opportunities to enhance your skills and knowledge as you progress in your career.

As an Officer, Java Full Stack Developer - C11, based in Pune/Chennai, India, you will play a crucial role in our development team by coding, testing, documenting, and releasing stories. Your responsibilities will include reviewing code for accuracy, collaborating with cross-functional teams to deliver high-quality software, identifying vulnerabilities in applications, and mentoring junior analysts. It is essential to have 5-9 years of experience as a Java Full Stack Developer, with expertise in Java, Spring Boot, Hibernate, Oracle/SQL, RESTful microservices, and multithreading. Experience with Apache Spark, Apache Kafka, Redis cache, Maven, GitHub, Jira, Agile processes, and AI tools is preferred, along with exposure to Angular UI, AppDynamics, Splunk, NoSQL databases, and GraphQL.

Working at Citi goes beyond just a job; it is about being part of a global family of dedicated professionals. Joining Citi means embracing career growth opportunities, contributing to your community, and making a meaningful impact. If you are ready to take the next step in your career, we invite you to apply for this role at Citi today. For more information and to apply, please visit: [Citi Careers](https://jobs.citi.com/dei)
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
hyderabad, telangana
On-site
This position is for Ultimo Software Solutions Pvt Ltd (Ultimosoft.com). You will be working as a Java/Scala Developer with the following responsibilities and requirements:
- Advanced proficiency in one or more programming languages such as Java and Scala, along with database skills.
- Hands-on experience as a Scala/Spark developer; self-rated Scala proficiency should be a minimum of 8 out of 10.
- Proficiency in automation and continuous delivery methods, along with a deep understanding of the Software Development Life Cycle.
- Strong knowledge of agile practices such as CI/CD, application resiliency, and security.
- Demonstrated expertise in software applications and technical processes in areas like cloud, artificial intelligence, machine learning, or mobile development.
- Experience with Java Spring Boot and Databricks, and a minimum of 7+ years of professional software engineering experience.
- Proven skills in Java, J2EE, Spring Boot, JPA, Axon, and Kafka.
- Familiarity with Maven and Gradle build tools, as well as the Kafka ecosystem, including the Kafka Streams library and Kafka Avro schemas.
- Providing end-to-end support for complex enterprise applications.
- Strong problem-solving, analytical, and communication skills.
- Work experience in Agile environments with a continuous delivery mindset.
- Understanding of microservices architecture and distributed system design patterns.
- Knowledge of CI/CD pipelines, DevOps practices, Redis cache, the RedisInsight tool, Grafana, Grafana Loki logging, Prometheus monitoring, Jenkins, ArgoCD, and Kubernetes.
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
haryana
On-site
We are seeking a highly experienced Big Data Engineer with a strong background in Python, Java, or Scala. You will be responsible for designing, building, and maintaining scalable data pipelines and data lake architectures. The ideal candidate has a proven ability to manage and deliver complex data engineering projects. This position offers an excellent opportunity to work on scalable data solutions in a collaborative environment.

Key responsibilities include developing robust data engineering solutions and working with technologies such as Apache Spark, Hadoop, Hive, HBase, Kafka, Airflow, and Oozie. Collaboration with cross-functional teams is essential to ensure high data quality and governance. You will also be expected to leverage cloud platforms like AWS, Azure, or GCP for data processing and infrastructure, as well as manage and optimize data warehouses, data lakes, and RDBMS such as PostgreSQL or SQL Server.

Required skills for this position include 10+ years of experience in Information Technology, particularly in the Big Data ecosystem. Strong programming skills in Python, Java, or Scala are a must, along with deep knowledge of and hands-on experience with Apache Spark, Hadoop, Hive, HBase, Kafka, Airflow, and Oozie. Experience with cloud environments (AWS, Azure, or GCP) is highly desirable, as is a good understanding of data modeling, data architecture, and data governance principles. Familiarity with version control (Git), Docker, and orchestration tools like Kubernetes is considered a plus.

Preferred qualifications include experience in real-time data processing using Kafka or Spark Streaming, exposure to NoSQL databases such as MongoDB or Cassandra, and certification in AWS Big Data or equivalent, which is a strong advantage. This position is full-time and open only to women candidates.
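As one hedged illustration of the orchestration work this role mentions, a minimal Airflow DAG chaining a Spark ingest job to a validation job might look like the following; the DAG id, schedule, and job paths are hypothetical:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="clickstream_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="spark_ingest",
        # {{ ds }} is Airflow's templated execution date.
        bash_command="spark-submit --master yarn /jobs/ingest_clickstream.py {{ ds }}",
    )
    validate = BashOperator(
        task_id="quality_check",
        bash_command="spark-submit --master yarn /jobs/validate_clickstream.py {{ ds }}",
    )
    ingest >> validate  # run validation only after ingest succeeds
```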
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
As an experienced Big Data Engineer with 6 to 8 years of experience, you will be responsible for designing, developing, and maintaining robust and scalable data processing applications using Apache Spark with Scala. Your key responsibilities include implementing and managing complex data workflows, conducting performance optimization and memory tuning for Spark jobs, and working proficiently with Apache Hive for data warehousing and HDFS for distributed storage within the Hadoop ecosystem.

You will utilize your strong hands-on experience with Apache Spark using Scala, Hive, and HDFS, along with proficiency in Oozie workflows, ScalaTest, and Spark performance tuning. Your deep understanding of the Spark UI, YARN logs, and debugging distributed jobs will be essential in diagnosing and resolving complex issues in distributed data systems to ensure data accuracy and pipeline efficiency.

In addition, you will write comprehensive unit tests using ScalaTest and follow best practices for building scalable and reliable data pipelines, contributing to the successful execution of data projects in an enterprise-level distributed data environment. Your familiarity with Agile/Scrum methodologies will enable you to collaborate with cross-functional teams to deliver high-quality data solutions, and your working knowledge of CI/CD pipelines, GitHub, Maven, and Nexus will be used for continuous integration, delivery, and version control.

Overall, you will play a crucial role in the development, optimization, and maintenance of data processing applications, ensuring high performance, reliability, and efficiency in data workflows within the Big Data platform.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
Job Description: As the Digital Transformation Lead at Godrej Agrovet Limited (GAVL) in Mumbai, you will play a crucial role in driving innovation and productivity in the agri-business sector. GAVL is dedicated to improving the livelihood of Indian farmers by developing sustainable solutions that increase crop and livestock yields. With leading market positions in Animal Feed, Crop Protection, Oil Palm, Dairy, Poultry, and Processed Foods, GAVL is committed to making a positive impact on the agricultural industry.

With annual sales of 6,000 crore INR in FY 18-19, GAVL has a widespread presence across India, offering high-quality feed and nutrition products for cattle, poultry, aqua feed, and specialty feed. The company operates 50 manufacturing facilities, has a network of 10,000 rural distributors/dealers, and employs over 2,500 people.

At GAVL, our people philosophy revolves around the concept of tough love. We set high expectations for our team members, recognizing and rewarding performance and potential through career growth opportunities. We prioritize the development, mentoring, and training of our employees, understanding that diverse interests and passions contribute to a strong team dynamic. We encourage individuals to explore their full potential and provide a supportive environment for personal and professional growth.

In this role, you will use your expertise as a Data Scientist to extract insights from complex datasets, develop predictive models, and drive data-driven decisions across the organization. You will collaborate with various teams, including business, engineering, and product, to apply advanced statistical methods, machine learning techniques, and domain knowledge to real-world challenges.

Key Responsibilities:
- Data Cleaning, Preprocessing & Exploration: Prepare and analyze data, ensuring quality and completeness by addressing missing values, outliers, and data transformations to identify patterns and anomalies.
- Machine Learning Model Development: Build, train, and deploy machine learning models using tools like MLflow on the Databricks platform, exploring regression, classification, clustering, and time series analysis techniques.
- Model Evaluation & Deployment: Enhance model performance through feature selection, leverage distributed computing capabilities for efficient processing, and use CI/CD tools for deployment automation.
- Collaboration: Work closely with data engineers, analysts, and stakeholders to understand business requirements and translate them into data-driven solutions.
- Data Visualization and Reporting: Create visualizations and dashboards to communicate insights to technical and non-technical audiences using tools like Databricks and Power BI.
- Continuous Learning: Stay updated on the latest advancements in data science, machine learning, and industry best practices to enhance skills and processes.

Required Technical Skills:
- Proficiency in statistical analysis, hypothesis testing, and machine learning techniques.
- Familiarity with NLP, time series analysis, computer vision, and A/B testing.
- Strong knowledge of Databricks, Spark DataFrames, MLlib, and programming tools (Python, TensorFlow, Pandas, scikit-learn, PySpark, NumPy).
- Proficiency in SQL for data extraction, manipulation, and analysis, along with experience in MLflow and cloud data storage tools.

Qualifications:
- Education: Bachelor's degree in Statistics, Mathematics, Computer Science, or a related field.
- Experience: Minimum of 3 years in a data science or analytical role.

Join us at Vikhroli, Mumbai, and be a part of our mission to drive digital transformation and innovation in the agricultural sector at Godrej Agrovet Limited.
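For illustration, here is a minimal sketch of the MLflow-tracked training workflow the responsibilities describe, using a synthetic dataset and a hypothetical run name rather than anything from the posting:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real business data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="demo-classifier"):
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    f1 = f1_score(y_test, model.predict(X_test))
    # Params, metrics, and the model artifact all land in the tracking server.
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("f1", f1)
    mlflow.sklearn.log_model(model, "model")
```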
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Platform Engineer Lead at Barclays, your role is crucial in building and maintaining the systems that collect, store, process, and analyze data, including data pipelines, data warehouses, and data lakes. You are responsible for ensuring the accuracy, accessibility, and security of all data.

To excel in this role, you should have hands-on coding experience in Java or Python and a strong understanding of AWS development, encompassing services such as Lambda, Glue, Step Functions, and IAM roles. Proficiency in building efficient data pipelines using Apache Spark and AWS services is essential. You are expected to possess strong technical acumen, troubleshoot complex systems, and apply sound engineering principles to problem-solving. Continuous learning and staying current with new technologies are key attributes for success in this role.

Design experience across diverse projects where you have led the technical development is advantageous, especially in the Big Data/Data Warehouse domain within financial services. Experience developing enterprise-level software solutions, knowledge of file formats such as JSON, Iceberg, and Avro, and familiarity with streaming services such as Kafka, MSK, and Kinesis are highly valued. Effective communication, collaboration with cross-functional teams, documentation skills, and experience mentoring team members are also important aspects of this role.

Your accountabilities will include the construction and maintenance of data architecture pipelines, designing and implementing data warehouses and data lakes, developing processing and analysis algorithms, and collaborating with data scientists to deploy machine learning models. You will also be expected to contribute to strategy, drive requirements for change, manage resources and policies, deliver continuous improvements, and demonstrate leadership behaviors if in a leadership role.

Ultimately, as a Data Platform Engineer Lead at Barclays in Pune, you will play a pivotal role in ensuring data accuracy, accessibility, and security while leveraging your technical expertise and collaborative skills to drive innovation and excellence in data management.
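A hedged sketch of the kind of Spark-on-AWS pipeline step this role describes, reading raw JSON and writing partitioned Parquet; the bucket names and columns are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("json-to-parquet").getOrCreate()

# Hypothetical S3 locations; substitute your own bucket and prefixes.
raw = spark.read.json("s3://example-raw-bucket/events/2024-06-01/")

cleaned = (
    raw.filter(F.col("event_type").isNotNull())      # drop malformed records
       .withColumn("event_date", F.to_date("event_ts"))
)

# Partitioning by date keeps downstream scans cheap.
(cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/"))
```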
Posted 2 weeks ago
5.0 - 10.0 years
13 - 23 Lacs
Bhubaneswar, Hyderabad
Work from Office
Greetings from Finix!
- Experience required in Data Engineering or related roles.
- Strong expertise in Databricks and Apache Spark.
- Experience in EDA, SQL, and GCP (BigQuery).
- Strong knowledge of Python required.
We are looking for immediate joiners.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
ahmedabad, gujarat
On-site
You should be proficient in programming languages such as Python and R, with a strong understanding of machine learning algorithms and frameworks like TensorFlow, PyTorch, and scikit-learn. Experience with conversational AI platforms, especially Oracle Digital Assistant, is preferred, as is familiarity with data processing tools and platforms such as Apache Spark and Hadoop. Familiarity with Oracle Cloud Infrastructure (OCI) and its services for deploying ODA and AI/ML models is essential. Experience with natural language processing (NLP) techniques and libraries like NLTK and spaCy is required, and an understanding of deep learning architectures for NLP, such as transformers and BERT, is a plus. Knowledge of containerization and orchestration tools like Docker and Kubernetes is beneficial. You should also possess skills in statistical analysis and data visualization in R.

This position is full-time and requires working the morning shift. Ideally, candidates should have at least 2 years of experience in AI/ML. The work location is in person.
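To make the NLP requirement concrete, here is a minimal spaCy sketch of the entity extraction that often feeds a conversational-AI pipeline; the sample text is invented and the small English model is assumed to be installed:

```python
import spacy

# Assumes: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

text = "Order 4512 was shipped to Ahmedabad on 3 July and billed to Acme Corp."
doc = nlp(text)

# Named entities of this kind can be mapped onto the slots/intents a
# platform such as Oracle Digital Assistant expects.
for ent in doc.ents:
    print(ent.text, ent.label_)
```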
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
As a skilled Senior Engineer at Impetus Technologies, you will utilize your expertise in Java and Big Data technologies to design, develop, and deploy scalable data processing applications. Your responsibilities will include collaborating with cross-functional teams, developing high-quality code, and optimizing data processing workflows. Additionally, you will mentor junior engineers and contribute to architectural decisions to enhance system performance and scalability.

Key Responsibilities:
- Design, develop, and maintain high-performance applications using Java and Big Data technologies.
- Implement data ingestion and processing workflows with frameworks like Hadoop and Spark.
- Collaborate with the data architecture team to define efficient data models.
- Optimize existing applications for performance, scalability, and reliability.
- Mentor junior engineers, provide technical leadership, and promote continuous improvement.
- Participate in code reviews and ensure best practices for coding, testing, and documentation.
- Stay up to date with technology trends in Java and Big Data, and evaluate new tools and methodologies.

Skills and Tools Required:
- Strong proficiency in Java programming for building complex applications.
- Hands-on experience with Big Data technologies like Apache Hadoop, Apache Spark, and Apache Kafka.
- Understanding of distributed computing concepts and technologies.
- Experience with data processing frameworks and libraries such as MapReduce and Spark SQL.
- Familiarity with storage and database systems like HDFS, NoSQL databases (e.g., Cassandra, MongoDB), and SQL databases.
- Strong problem-solving skills and the ability to troubleshoot complex issues.
- Knowledge of version control systems like Git and familiarity with CI/CD pipelines.
- Excellent communication and teamwork skills for effective collaboration.

About the Role: You will be responsible for designing and developing scalable Java applications for Big Data processing, collaborating with cross-functional teams to implement innovative solutions, and ensuring code quality and performance through best practices and testing methodologies.

About the Team: You will work with a diverse team of skilled engineers, data scientists, and product managers in a collaborative environment that encourages knowledge sharing and continuous learning. Technical workshops and brainstorming sessions will provide opportunities to enhance your skills and stay updated with industry trends.

Responsibilities:
- Developing and maintaining high-performance Java applications for efficient data processing.
- Implementing data integration and processing frameworks using Big Data technologies.
- Troubleshooting and optimizing systems to enhance performance and scalability.

To succeed in this role, you should have:
- Strong proficiency in Java and experience with Big Data technologies and frameworks.
- A solid understanding of data structures, algorithms, and software design principles.
- Excellent problem-solving skills and the ability to work independently and within a team.
- Familiarity with cloud platforms and distributed computing concepts (a plus).

Qualification: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience: 7 to 10 years
Job Reference Number: 13131
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Senior Azure Data Engineer, you will be responsible for designing, building, and optimizing scalable data pipelines and solutions using Databricks and modern Azure data engineering tools. Your expertise in Databricks and Azure services will be crucial in delivering high-quality, secure, and efficient data platforms.

Your key skills and expertise should include strong hands-on experience with Databricks, proficiency in Azure Data Factory (ADF) for orchestrating ETL workflows, excellent programming skills in Python with advanced PySpark skills, a solid understanding of Apache Spark internals and tuning, and expertise in SQL for writing complex queries and optimizing joins. You should also be familiar with data warehousing principles and modeling techniques and have knowledge of Azure data services such as Data Lake Storage, Synapse Analytics, and SQL Database.

In this role, you will design and implement robust, scalable, and efficient data pipelines using Databricks and ADF, leverage Unity Catalog for securing and governing sensitive data, optimize Databricks jobs and queries for speed, cost, and scalability, build and maintain Delta Lake tables and data models for analytics and BI, collaborate with stakeholders to define data needs and deliver business value, automate workflows to improve reliability and data quality, troubleshoot and monitor pipelines for uptime and data accuracy, and mentor junior engineers in Databricks and Azure data engineering best practices.

The ideal candidate has at least 5 years of experience in data engineering with a focus on Azure, a demonstrated ability to work with large-scale distributed systems, and strong communication and teamwork skills; certifications in Databricks and/or Azure Data Engineering would be a plus.
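As a minimal illustration of the Delta Lake work described, here is a PySpark snippet that curates a raw table into a partitioned Delta table and then compacts it; the table and column names are hypothetical, and `spark` is the session a Databricks runtime provides:

```python
from pyspark.sql import functions as F

# Hypothetical source and target tables.
orders = (spark.read.table("raw.orders")
          .filter(F.col("order_ts") >= "2024-01-01")
          .withColumn("order_date", F.to_date("order_ts")))

(orders.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("curated.orders_daily"))

# Compact small files and co-locate rows for faster point lookups.
spark.sql("OPTIMIZE curated.orders_daily ZORDER BY (customer_id)")
```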
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
hyderabad, telangana
On-site
As a Software Engineer - Backend (Python) with 7+ years of experience, you will be responsible for designing and building the backend components of the GenAI Platform in Hyderabad. Your role will involve collaborating with geographically distributed cross-functional teams and participating in an on-call rotation to handle production incidents. The GenAI Platform offers safe, compliant, and cost-efficient access to LLMs, both open-source and commercial, while adhering to Experian standards and policies. You will build reusable tools, frameworks, and coding patterns for fine-tuning LLMs and developing RAG-based applications.

To succeed in this role, you must possess the following skills:
- 7+ years of professional backend web development experience with Python
- Experience with AI and RAG
- Proficiency with DevOps and IaC tools like Terraform and Jenkins
- Familiarity with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow
- Expertise in web development frameworks such as Flask, Django, or FastAPI
- Knowledge of concurrent programming designs like AsyncIO
- Experience with public cloud platforms like AWS, Azure, GCP (preferably AWS)
- Understanding of CI/CD practices, tools, and frameworks

The following skills would be considered nice to have:
- Experience with Apache Kafka and developing Kafka client applications in Python
- Familiarity with big data processing frameworks, especially Apache Spark
- Knowledge of containers (Docker) and container platforms like AWS ECS or AWS EKS
- Proficiency in unit and functional testing frameworks
- Experience with Python packaging options such as Wheel, PEX, or Conda
- Understanding of metaprogramming techniques in Python

Join our team and contribute to the development of cutting-edge technologies in a collaborative and dynamic environment.
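Since the posting calls out AsyncIO, here is a minimal sketch of the concurrency pattern; the `call_llm` coroutine is a hypothetical stand-in for a real async client call, not an actual platform API:

```python
import asyncio

async def call_llm(prompt: str) -> str:
    # Placeholder for an async HTTP call to an LLM gateway.
    await asyncio.sleep(0.1)
    return f"response to: {prompt}"

async def main():
    prompts = ["summarise doc 1", "summarise doc 2", "summarise doc 3"]
    # gather() schedules the coroutines concurrently on one event loop,
    # so total latency is roughly that of the slowest call.
    results = await asyncio.gather(*(call_llm(p) for p in prompts))
    for result in results:
        print(result)

asyncio.run(main())
```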
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
noida, uttar pradesh
On-site
You will be responsible for building the most personalized and intelligent news experiences for India's next 750 million digital users. As our Principal Data Engineer, your main task will be designing and maintaining the data infrastructure that powers personalization systems and analytics platforms. This involves ensuring seamless data flow from source to consumption, architecting scalable data pipelines to process massive volumes of user interaction and content data, and developing robust ETL processes for large-scale transformations and analytical processing. You will also create and maintain data lakes and warehouses that consolidate data from multiple sources, optimized for ML model consumption and business intelligence, implement data governance practices, and collaborate with the ML team to ensure the right data is available for recommendation systems.

To excel in this role, you should have a Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field, along with 8-12 years of data engineering experience, including at least 3 years in a senior role. You must possess expert-level SQL skills and strong experience in the Apache Spark ecosystem (Spark SQL, Streaming, SparkML), as well as proficiency in Python/Scala. Experience with the AWS data ecosystem (Redshift, S3, Glue, EMR, Kinesis, Lambda, Athena) and ETL frameworks (Glue, Airflow) is essential. A proven track record of building large-scale data pipelines in production environments, particularly in high-traffic digital media, will be advantageous. Excellent communication skills are also required, as you will need to collaborate effectively across teams in a fast-paced environment that demands engineering agility.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As a Java/Scala Developer at our company in Hyderabad, you will play a crucial role in developing and maintaining software applications. Your responsibilities will include implementing functional programming methodologies, engaging in test-driven development, and building microservices. Collaboration with cross-functional teams will be essential to ensure the delivery of high-quality software solutions.

To excel in this role, you should possess advanced skills in one or more programming languages such as Java and Scala, along with database proficiency. Prior hands-on experience as a Scala/Spark developer is required; you should be able to self-rate your Scala expertise as a minimum of 8 out of 10 and demonstrate proficiency in Scala and Apache Spark development. Proficiency in automation and continuous delivery methods, as well as a deep understanding of the Software Development Life Cycle, are also key requirements, along with an advanced comprehension of agile practices such as CI/CD, application resiliency, and security.

We are looking for a developer with demonstrated expertise in software applications and technical processes within a specific technical discipline, such as cloud computing, artificial intelligence, machine learning, or mobile development. If you are passionate about software development and possess the necessary skills and experience, we encourage you to apply for this exciting opportunity.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
navi mumbai, maharashtra
On-site
Sciative is on a mission to create the future of dynamic pricing powered by artificial intelligence and big data. Our Software-as-a-Service products are used globally across various industries: retail, ecommerce, travel, and entertainment. We are a fast-growing startup with 60+ employees based in a plush office in Navi Mumbai. With our result-oriented product portfolio, we want to become the most customer-oriented company on the planet. To get there, we need exceptionally talented, bright, and driven people. We are looking for a dynamic, organized self-starter to join our Tech Team.

Responsibilities:
- Collaborate with data scientists, software engineers, and business stakeholders to understand data requirements and design efficient data models.
- Develop, implement, and maintain robust and scalable data pipelines, ETL processes, and data integration solutions. Extract, transform, and load data from various sources, ensuring data quality, integrity, and consistency.
- Optimize data processing and storage systems to handle large volumes of structured and unstructured data efficiently. Perform data cleaning, normalization, and enrichment tasks to prepare datasets for analysis and modeling.
- Monitor data flows and processes, and identify and resolve data-related issues and bottlenecks.
- Contribute to the continuous improvement of data engineering practices and standards within the organization.
- Stay up to date with industry trends and emerging technologies in data engineering, artificial intelligence, and dynamic pricing.

Candidate Profile:
- Strong passion for data engineering, artificial intelligence, and problem-solving.
- Solid understanding of data engineering concepts, data modeling, and data integration techniques.
- Proficiency in Python, SQL, and web scraping.
- Understanding of databases (NoSQL, relational, and in-memory) and technologies such as MongoDB, Redis, and Apache Spark would be an add-on.
- Knowledge of distributed computing frameworks and big data technologies (e.g., Hadoop, Spark) is a plus.
- Excellent analytical and problem-solving skills, with a keen eye for detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team-oriented environment.
- Self-motivated, a quick learner, and adaptable to changing priorities and technologies.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
The Content and Data Analytics team is part of DataOps within Global Operations at Elsevier, focusing on providing data analysis services, primarily using Databricks. The team mainly serves product owners and data scientists of Elsevier's Research Data Platform, contributing to the delivery of leading data analytics products for scientific research, including Scopus and SciVal.

As a Senior Data Analyst at Elsevier, you are expected to have a solid understanding of best practices and the ability to execute projects and initiatives independently. You should be capable of producing advanced-level insights and recommendations and of leading highly complex analytics efforts autonomously. Your responsibilities will include supporting data scientists within the domains of the Research Data Platform and engaging in analytical activities such as analyzing large datasets, performing data preparation, and reviewing data science algorithms. You must possess a keen eye for detail, strong analytical skills, expertise in at least one data analysis system, curiosity, dedication to quality work, and an interest in scientific research.

The requirements for this role include a minimum of 5 years of work experience; coding skills in at least one programming language (preferably Python) and SQL; familiarity with string manipulation functions such as regular expressions; prior exposure to data analysis tools such as Pandas or Apache Spark/Databricks; knowledge of basic statistics relevant to data science; and experience with visualization tools such as Tableau or Power BI. You will be expected to build and maintain strong relationships with Data Scientists and Product Managers, align activities with stakeholders, and present achievements and project updates to various stakeholders. Key competencies for this role include collaborating effectively as part of a team, taking initiative in problem-solving, and driving tasks to successful conclusions.

Elsevier offers various benefits to promote a healthy work-life balance, including well-being initiatives, shared parental leave, study assistance, and sabbaticals. The company also provides comprehensive health insurance, flexible working arrangements, employee assistance programs, modern family benefits, various paid time off options, and subsidized meals. Elsevier is a global leader in information and analytics, supporting science, research, health education, and interactive learning while addressing the world's challenges and fostering a sustainable future.
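To ground the string-manipulation requirement, here is a small Pandas sketch using regular expressions to normalise free-text affiliation records; the sample data is invented:

```python
import pandas as pd

affiliations = pd.DataFrame({
    "raw": [
        "Dept. of Physics, Univ. of Oxford, UK",
        "department of physics , university of oxford , uk",
    ]
})

cleaned = (affiliations["raw"]
           .str.lower()
           .str.replace(r"\s*,\s*", ", ", regex=True)        # tidy comma spacing
           .str.replace(r"\bdept\b\.?", "department", regex=True))

# Pull out the institution with a capture group (expand=False -> Series).
affiliations["institution"] = cleaned.str.extract(
    r"(univ(?:ersity)?\.? of \w+)", expand=False)
print(affiliations)
```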
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
Job Description: As a Software and Data Science Engineer, you will be a crucial part of our team, bringing a strong engineering background and analytical skills. Your main responsibility will be to develop and maintain our platform, utilizing large-scale data to extract valuable insights and drive business growth. Your role will involve working on both front-end and back-end components and collaborating with various technical and non-technical stakeholders.

We highly value an analytical mindset and problem-solving skills in applying data structures, storage systems, cloud infrastructure, front-end frameworks, and other technical tools to complex technical challenges. Curiosity and experience in working with large-scale data to tackle business problems are significant assets, and the ability to work efficiently within diverse teams and adapt to evolving objectives is critical for success in this dynamic environment.

To excel in this position, you should have a solid engineering background, preferably in Computer Science, Mathematics, Software Engineering, Physics, Data Science, or a related field. Proficiency in programming languages like Python, Java, C++, TypeScript/JavaScript, or similar is essential, and experience in both front-end and back-end development, including frameworks like TypeScript and JavaScript and technologies like Apache Spark, is beneficial.

Your responsibilities will include designing, building, and maintaining scalable platform solutions with a focus on end-to-end data pipelines; collaborating with cross-functional teams to address technical challenges; leveraging large-scale data for decision-making; and contributing to best practices in software engineering and data management. Participating in code reviews, providing feedback to peers, and staying current with emerging technologies and industry trends will also be part of your role.

Join our team as a Software and Data Science Engineer and play a pivotal role in shaping the future of our platform through innovative solutions and data-driven insights.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
jaipur, rajasthan
On-site
As an experienced data engineer specializing in dashboard story development and data engineering pipelines, you will play a crucial role in analyzing log data to extract actionable insights for product enhancements and feature optimization. With 5+ years of hands-on experience, you will collaborate with cross-functional teams to gather business requirements, translate them into technical specifications, and design interactive dashboards using tools like Tableau, Power BI, or ThoughtSpot AI.

You will be responsible for managing large volumes of application log data using Google BigQuery, ensuring data integrity, consistency, and accessibility for analytical purposes. Your expertise in identifying patterns, trends, and anomalies in log data will be instrumental in visualizing key metrics and insights, and in communicating findings effectively with customer success and leadership teams. In addition, you will work closely with product teams to understand log data generated by Python-based applications, define key performance indicators (KPIs), and optimize data pipelines and storage in BigQuery. Strong communication, teamwork, and problem-solving skills, along with the ability to learn quickly and adapt to new technologies, are essential in this role.

Preferred qualifications include knowledge of Generative AI (GenAI) and LLM-based solutions, and experience with ThoughtSpot AI, Google Cloud Platform (GCP), and modern data warehouse architectures. You will also have the opportunity to participate in proof-of-concepts (POCs) and pilot projects, articulate ideas clearly to the team, and take ownership of data analytics and engineering solutions. Additional nice-to-have qualifications include experience working with large datasets and distributed data processing tools such as Apache Spark or Hadoop, familiarity with Agile development methodologies, and experience with ETL tools such as Informatica or Azure Data Factory.

This full-time position in the IT Services and IT Consulting industry offers a dynamic environment where you can leverage your skills to drive meaningful business outcomes.
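For illustration, a minimal sketch of querying application logs in BigQuery from Python with the official client; the project, dataset, and field names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Hypothetical log table; adapt names to your dataset.
sql = """
    SELECT DATE(timestamp) AS day, severity, COUNT(*) AS events
    FROM `my-project.app_logs.requests`
    WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY day, severity
    ORDER BY day
"""

# The resulting DataFrame can feed a Tableau / Power BI / ThoughtSpot dashboard.
df = client.query(sql).to_dataframe()
print(df.head())
```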
Posted 2 weeks ago
8.0 - 10.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Must have:
- Strong programming skills in languages like Python and Java
- Hands-on experience with at least one cloud (GCP preferred)
- Experience working with Docker
- Environment management (e.g., venv, pip, poetry)
- Experience with orchestrators like Vertex AI Pipelines, Airflow, etc.
- Understanding of the full ML lifecycle end-to-end
- Data engineering and feature engineering techniques
- Experience with ML modelling and evaluation metrics (see the sketch after this list)
- Experience with TensorFlow, PyTorch, or another framework
- Experience with model monitoring
- Advanced SQL knowledge
- Awareness of streaming concepts such as windowing, late arrival, triggers, etc.

Good to have:
- Hyperparameter tuning experience
- Proficiency in Apache Spark, Apache Beam, or Apache Flink
- Hands-on experience with distributed computing
- Working experience in data architecture design
- Awareness of storage and compute options and when to choose what
- Good understanding of cluster optimisation and pipeline optimisation strategies
- Exposure to GCP tools for building end-to-end data pipelines for various scenarios (including ingesting data from traditional databases as well as integrating API-based data sources)
- A business mindset for understanding data and how it will be used for BI and analytics purposes
- Working experience with CI/CD pipelines, deployment methodologies, and Infrastructure as Code (e.g., Terraform)
- Hands-on experience with Kubernetes
- Vector databases such as Qdrant
- LLM experience (embeddings generation, embeddings indexing, RAG, Agents, etc.)

Key Responsibilities:
- Design, develop, and implement AI models and algorithms using Python and Large Language Models (LLMs).
- Collaborate with data scientists, engineers, and business stakeholders to define project requirements and deliver impactful AI-driven solutions.
- Optimize and manage data pipelines, ensuring efficient data storage and retrieval with PostgreSQL.
- Continuously research emerging AI trends and best practices to enhance model performance and capabilities.
- Deploy, monitor, and maintain AI applications in production environments, adhering to industry best standards.
- Document technical designs, workflows, and processes to facilitate clear knowledge transfer and project continuity.
- Communicate technical concepts effectively to both technical and non-technical team members.

Required Skills and Qualifications:
- Proven expertise in Python programming for AI/ML applications.
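As one small, hedged example of the "ML modelling and evaluation metrics" item above, a scikit-learn train-and-evaluate loop on a bundled dataset:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Report both threshold-based and ranking-based metrics.
proba = model.predict_proba(X_test)[:, 1]
print(classification_report(y_test, model.predict(X_test)))
print("ROC AUC:", roc_auc_score(y_test, proba))
```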
Posted 2 weeks ago
4.0 - 9.0 years
4 - 9 Lacs
Gurugram
Work from Office
As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- A Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience, including 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- A strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- A proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- (Good to have) Experience with containerization technologies such as Docker and Kubernetes.
- (Good to have) Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
You will be joining our data engineering team as an experienced Python + Databricks Developer. Your role will involve designing, developing, and maintaining scalable data pipelines using Databricks and Apache Spark, and writing efficient Python code for data transformation, cleansing, and analytics. Collaboration with data scientists, analysts, and engineers to understand data needs and deliver high-performance solutions is a key part of this role. Additionally, you will optimize and tune data pipelines for performance and cost efficiency, implement data validation, quality checks, and monitoring, and work with cloud platforms, preferably Azure or AWS, to manage data workflows. Ensuring best practices in code quality, version control, and documentation is essential.

To be successful in this position, you should have at least 5 years of professional experience in Python development and 3 years of hands-on experience with Databricks, including notebooks, clusters, Delta Lake, and job orchestration. Strong experience with Spark, particularly PySpark, is required, along with proficiency in large-scale data processing and ETL/ELT pipelines and a solid understanding of data warehousing concepts and SQL. Experience with Azure Data Factory, AWS Glue, or other data orchestration tools would be advantageous, as would familiarity with version control tools like Git. Excellent problem-solving and communication skills are important for this role.
Posted 2 weeks ago
8.0 - 13.0 years
18 - 33 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Data Modular: JD details
Work Location: Bangalore/Chennai/Hyderabad/Gurgaon/Pune
Mode of work: Hybrid
Experience Level: 7+ years
Looking only for immediate joiners.
- Experience with RDBMS and NoSQL columnar stores (such as Apache Parquet or Apache Kylin): query optimization, performance tuning, caching, and filtering strategies.
- Experience with data lakes and fast-retrieval processes and techniques.
- Dynamic data modelling: enabling updated data models based on the underlying data.
- Caching and filtering techniques on data.
- Experience with Apache Spark or similar big data technologies.
- Knowledge of AWS, including IaC implementation.
- SQL transpilers and predicate pushing.
- GraphQL as the top layer (good to have).
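To illustrate the predicate-pushing point above, a small PySpark sketch: filters against a Parquet-backed lake are pushed into the scan so only matching row groups are read. The path and columns are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pushdown-demo").getOrCreate()

# Hypothetical lake location.
trades = spark.read.parquet("s3://example-lake/trades/")

recent_large = trades.filter(
    (F.col("trade_date") >= "2024-06-01") & (F.col("notional") > 1_000_000)
)

# The physical plan lists these predicates as PushedFilters on the scan node,
# so Parquet row-group statistics can skip non-matching data.
recent_large.explain()
```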
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
Changing the world through digital experiences is what Adobe is all about. We give everyone, from emerging artists to global brands, everything they need to design and deliver exceptional digital experiences! We are passionate about empowering people to create beautiful and powerful images, videos, and apps, transforming how companies interact with customers across every screen. We are on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

Role Summary: Digital Experience (DX) is a USD 4B+ business serving the needs of enterprise businesses, including 95%+ of Fortune 500 organizations. Adobe Marketo Engage, within Adobe DX, is the leading marketing automation platform, helping businesses engage customers effectively across surfaces and touchpoints. We are looking for strong and passionate engineers to join our team as we scale the business by building next-gen products and contributing to our existing offerings. If you're passionate about innovative technology, we would be excited to talk to you!

What You'll Do:
- Collaborate with architects, product management, and engineering teams to build solutions that increase the product's value.
- Develop technical specifications, prototypes, and presentations to communicate your ideas.
- Stay proficient in emerging industry technologies and trends, communicate that knowledge to the team, and use it to influence product direction.
- Demonstrate exceptional coding skills by writing unit tests and ensuring code quality and code coverage.
- Ensure code is always checked in and that source control standards are followed.

What You Need to Succeed:
- 5+ years of experience in software development.
- Expertise in Java, Spring Boot, REST services, MySQL or Postgres, and MongoDB.
- Good working knowledge of the Azure ecosystem and Azure Data Factory.
- Good understanding of working with Cassandra, Solr, Elasticsearch, and Snowflake.
- Ambition and willingness to tackle unknowns, with a strong bias to action.
- Knowledge of Apache Spark and Scala is an added advantage.
- Strong interpersonal, analytical, problem-solving, and conflict-resolution skills.
- Excellent speaking, writing, and presentation skills, as well as the ability to persuade, encourage, and empower others.
- A Bachelor's or Master's degree in Computer Science or a related field.

Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
chennai, tamil nadu
On-site
The Applications Development Group Manager is a senior management-level position responsible for accomplishing results through the management of a team or department, establishing and implementing new or revised application systems and programs in coordination with the Technology Team. The overall objective of this role is to drive applications systems analysis and programming activities.

You will manage multiple teams of professionals to accomplish established goals and conduct personnel duties for the team (e.g., performance evaluations, hiring, and disciplinary actions); provide strategic influence and exercise control over resources, budget management, and planning while monitoring end results; utilize in-depth knowledge of concepts and procedures within your own area and basic knowledge of other areas to resolve issues; ensure essential procedures are followed and contribute to defining standards; and integrate in-depth knowledge of applications development with the overall technology function to achieve established goals. You will provide evaluative judgement based on analysis of facts in complicated, unique, and dynamic situations, drawing from internal and external sources, and will influence and negotiate with senior leaders across functions, as well as communicate with external parties as necessary.

You will appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets, by driving compliance with applicable laws, rules, and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency, as well as effectively supervising the activity of others and creating accountability with those who fail to maintain these standards.

Qualifications:
- 10+ years of relevant experience
- Experience in applications development
- Experience in management
- Experience managing global technology teams
- Working knowledge of industry practices and standards
- Consistently demonstrates clear and concise written and verbal communication

Education:
- Bachelor's degree/University degree or equivalent experience
- Master's degree preferred

Required Skills:
- Programming skills, including concurrent, parallel, and distributed systems programming
- Expert-level knowledge of Java
- Expert-level experience with HTTP, RESTful web services, and API design
- Messaging technologies (Kafka)
- Experience with Big Data technologies: Hadoop, Apache Spark, Python, PySpark
- Experience with Reactive Streams

Desirable Skills:
- Messaging technologies
- Familiarity with Hadoop SQL interfaces such as Hive and Spark SQL
- Experience with Kubernetes
- Good understanding of the Linux OS
- Experience with Gradle and Maven would be beneficial

Note: For complementary skills, please see the job requirements above and/or contact the recruiter. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 2 weeks ago