8.0 - 12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At Porter, we are passionate about improving productivity. We want to help businesses, large and small, optimize their last-mile operations and empower them to unleash the growth of their core functions. Last-mile delivery logistics is one of the biggest and fastest-growing sectors of the economy, with a market size upwards of 50 billion USD and a growth rate exceeding 15% CAGR. Porter is the fastest-growing leader in this sector, with operations in major cities, a fleet of over 1L registered and 50k active driver-partners, and a customer base of 3.5M monthly active customers. Built on an industry-best technology platform, we have raised over 150 million USD from investors including Sequoia Capital, Kae Capital, Mahindra Group, LGT Aspada, Tiger Global, and Vitruvian Partners. We are addressing a massive problem and going after a huge market. We are trying to create a household name in transportation, and our ambition is to disrupt all facets of the supply chain. At Porter, we're here to do the best work of our lives. If you want to do the same and love the challenges and opportunities of a fast-paced work environment, then we believe Porter is the right place for you.

Responsibilities:
Business and Product Alignment: Work closely with business and product stakeholders to identify and pursue opportunities to which engineering can contribute. Identify broad OKRs for the team that align with organization, business unit, and product goals. Formalize business requirements into tangible sprints.
Vision, Strategy, and Planning: Drive the adoption of technical frameworks and specific technologies. Drive technical re-design, remodeling, and refactoring of systems for robustness and sustainability. Own cost planning, performance monitoring, and optimization for systems. Plan the team roadmap. Drive high-level design for projects. Drive adoption of guiding design frameworks such as domain-driven design and clean architecture, resulting in a robust layered architecture. Focus also on the non-functional aspects of design, including but not limited to database design, communication protocols, and deployment systems.
Code Review: Drive good code review practices, using the review process as a mentoring tool to upskill people.
Project Management: Drive project execution by delegating tasks effectively. Resolve blockers through technical expertise and negotiation. Estimate timelines and ensure adherence to them through effective syncs.
Team Management: Manage a team of 5-6 members. Conduct periodic assessments of direct reports, chart their career growth paths, and train them. Recruit members into the team by collaborating effectively with the recruitment team.

Qualifications:
Bachelor's or Master's degree; B.Tech/M.Tech from a top-tier college.
Spring Boot, Ruby on Rails, Node.js, Java Play, AWS Lambda.
Kotlin, Java, Ruby, JavaScript, Python.
PostgreSQL, Aerospike, Redis, DynamoDB, Amazon Redshift.
HTTP, Amazon SQS, Sidekiq, Amazon SNS.
Amazon ECS, Docker.
Domain-Driven Design, Clean Architecture, Layered Architecture.
8-12 years of relevant engineering work experience, including at least 2 hands-on technical management experiences.
Experience leading a technical team of at least 3 members across different technical operations.
Strong technical background and the ability to contribute to design and development across the software product lifecycle.
Familiarity with our tech stack (Python/Django/MySQL/Oracle/NoSQL/AngularJS/Linux).
Familiarity with containers such as Docker.
Familiarity with implementing organizational processes using tools like Asana, Git, Docs, Slack, etc.
Posted 2 weeks ago
4.0 years
0 Lacs
India
Remote
Job Title: Data Scientist
Location: Remote
Job Type: Full-Time | Permanent
Experience Required: 4+ Years

About the Role: We are looking for a highly motivated and analytical Data Scientist with 4+ years of industry experience to join our data team. The ideal candidate will have a strong background in Python and SQL, plus experience deploying machine learning models using AWS SageMaker. You will be responsible for solving complex business problems with data-driven solutions, developing models, and helping scale machine learning systems into production environments.

Key Responsibilities:
Model Development: Design, develop, and validate machine learning models for classification, regression, and clustering tasks. Work with structured and unstructured data to extract actionable insights and drive business outcomes.
Deployment & MLOps: Deploy machine learning models using AWS SageMaker, including model training, tuning, hosting, and monitoring. Build reusable pipelines for model deployment, automation, and performance tracking.
Data Exploration & Feature Engineering: Perform data wrangling, preprocessing, and feature engineering using Python and SQL. Conduct exploratory data analysis (EDA) to identify patterns and anomalies.
Collaboration: Work closely with data engineers, product managers, and business stakeholders to define data problems and deliver scalable solutions. Present model results and insights to both technical and non-technical audiences.
Continuous Improvement: Stay updated on the latest advancements in machine learning, AI, and cloud technologies. Suggest and implement best practices for experimentation, model governance, and documentation.

Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
4+ years of hands-on experience in data science, machine learning, or applied AI roles.
Proficiency in Python for data analysis, model development, and scripting.
Strong SQL skills for querying and manipulating large datasets.
Hands-on experience with AWS SageMaker, including model training, deployment, and monitoring.
Solid understanding of machine learning algorithms and techniques (supervised/unsupervised).
Familiarity with libraries such as Pandas, NumPy, Scikit-learn, Matplotlib, and Seaborn.

Preferred Qualifications (Nice to Have):
Experience with MLOps tools (e.g., MLflow, SageMaker Pipelines).
Exposure to deep learning frameworks like TensorFlow or PyTorch.
Knowledge of the AWS data ecosystem (e.g., S3, Redshift, Athena).
Experience in A/B testing or statistical experimentation.
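For illustration of the SageMaker hosting workflow described above, a minimal sketch using the SageMaker Python SDK is shown below; the role ARN, bucket, model artifact, and inference script are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch: host a pre-trained scikit-learn model on a SageMaker endpoint
# with the SageMaker Python SDK (assumes the model artifact and inference
# script already exist; all names here are hypothetical).
import sagemaker
from sagemaker.sklearn.model import SKLearnModel

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role

model = SKLearnModel(
    model_data="s3://example-bucket/models/churn/model.tar.gz",  # hypothetical artifact
    role=role,
    entry_point="inference.py",        # script defining model_fn / predict_fn
    framework_version="1.2-1",
    sagemaker_session=session,
)

# Create a real-time endpoint; instance type and count would be tuned per workload.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")

# Invoke the endpoint with a small batch of feature rows, then clean up.
print(predictor.predict([[42.0, 3, 1], [18.5, 1, 0]]))
predictor.delete_endpoint()
```

In practice the same model object can also back batch transform jobs or be registered in a model registry to support the monitoring and governance responsibilities mentioned above.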
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
India
Remote
Job Title: Senior Big Data SME (Subject Matter Expert)
Location: Remote
Budget: Up to 28-30 LPA
Work Hours: UK Time (1:30 PM to 10:30 PM IST)
Industry: Technology / IT
Experience: 8 to 12 years in Data Engineering, Big Data, or related roles

About the Role: We are hiring a Senior Big Data Subject Matter Expert (SME) to support and guide ongoing cloud data initiatives, with a focus on mentorship, project support, and hands-on training in modern Big Data tools and technologies. This role is ideal for someone with deep technical experience who enjoys coaching teams, troubleshooting data platform issues, and enabling engineers to grow in real-world cloud projects. You'll collaborate with engineers, architects, and leadership to ensure best practices in cloud data solutions and smooth delivery across projects.

Key Responsibilities:
Provide technical support and guidance across Big Data platforms in Azure, AWS, or GCP.
Train and mentor engineers on Big Data tools (Spark, Kafka, Hadoop, etc.).
Assist project teams with architecture design, deployment, and debugging of data pipelines.
Collaborate with cross-functional teams to ensure operational excellence and platform stability.
Review and improve existing cloud data pipelines, focusing on performance, cost-efficiency, and scalability.
Conduct regular knowledge-sharing sessions, workshops, and best-practice walkthroughs.
Help define and implement data governance, access control, and security frameworks.

Technical Skills Required:
Cloud Platforms: Azure, AWS, GCP (at least 2 preferred)
Big Data Tools: Apache Spark, Kafka, Hadoop, Flink
ETL Tools: dbt, Apache Airflow, AWS Glue
Data Warehousing: Snowflake, BigQuery, Redshift, Synapse
Containerization & Orchestration: Docker, Kubernetes (AKS, EKS, GKE)
CI/CD & IaC: Terraform, GitHub Actions, Azure DevOps
Security & Governance: IAM, RBAC, data encryption, lineage tracking
Programming/Scripting: Python, Bash, PowerShell

Preferred (Nice to Have):
Exposure to Machine Learning pipelines and MLOps
Experience with serverless computing (AWS Lambda, Azure Functions)
Understanding of multi-cloud or hybrid-cloud architectures
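As a rough illustration of the Spark-plus-Kafka pipeline work this role supports and reviews, here is a minimal PySpark Structured Streaming sketch; the broker address, topic name, schema, and output paths are hypothetical placeholders rather than details from the posting.

```python
# Minimal sketch: consume JSON events from Kafka with Spark Structured Streaming
# and land them as Parquet, the kind of pipeline this SME role would review.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
       .option("subscribe", "orders")                      # hypothetical topic
       .load())

events = (raw.select(from_json(col("value").cast("string"), schema).alias("e"))
             .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "s3a://example-lake/orders/")                    # hypothetical sink
         .option("checkpointLocation", "s3a://example-lake/_chk/orders/")  # required for recovery
         .trigger(processingTime="1 minute")
         .start())
query.awaitTermination()
```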
Posted 2 weeks ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Hiring for AWS Data Engineer with FastAPI
Immediate joiners; Pune and Chennai locations; 7-10 years of experience.
Share profiles to neha.sandurea@godoublu.com

We are seeking a skilled and motivated AWS Data Engineer with expertise in FastAPI, Pub/Sub messaging systems, and Apache Airflow to build and maintain scalable, cloud-native applications on AWS. The ideal candidate has strong experience in modern Python development and strong hands-on experience with event-driven architectures and data workflow orchestration in AWS cloud environments.

Required Qualifications:
Bachelor's degree in computer science, data science, or a related technical discipline.
7+ years of hands-on experience in data engineering, including developing ETL/ELT data pipelines, API integration (FastAPI preferred), data platforms/products, and/or data warehouses.
3+ years of hands-on experience developing data-intensive solutions on AWS for operational and analytics workloads.
3+ years of experience designing both ETL/ELT for batch processing and data streaming architectures for real-time or near-real-time data ingestion and processing.
3+ years of experience developing and orchestrating complex data workflows using Apache Airflow (mandatory), including DAG authoring, scheduling, and monitoring.
2+ years of experience building and managing event-driven microservices using Pub/Sub systems (e.g., AWS SNS/SQS, Kafka); a minimal sketch combining FastAPI and SQS follows below.
3+ years of hands-on experience with two or more database technologies (e.g., MySQL, PostgreSQL, MongoDB) and data warehouses (e.g., Redshift, BigQuery, or Snowflake), as well as cloud-based data engineering technologies.
Proficient in dashboard/BI and data visualization tools (e.g., Tableau, QuickSight).
Able to develop conceptual, logical, and physical data models using ERDs.
Thrives in dynamic, cross-functional team environments.
Possesses a team-first mindset, valuing diverse perspectives and contributing to a collaborative work culture.
Approaches challenges with a positive, can-do attitude.
Willing to challenge the status quo, with the judgment to understand when and how to take appropriate risks to drive performance.
A passionate problem solver with high learning agility.
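For illustration of the event-driven FastAPI pattern referenced above, here is a minimal, hedged sketch of an endpoint that validates a payload and publishes it to an SQS queue; the queue URL, event model, and route are hypothetical placeholders.

```python
# Minimal sketch: a FastAPI endpoint that accepts an order event and publishes
# it to Amazon SQS for downstream consumers (e.g., an Airflow-triggered pipeline).
import boto3
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/order-events"  # hypothetical queue


class OrderEvent(BaseModel):
    order_id: str
    status: str
    amount: float


@app.post("/events/orders")
def publish_order_event(event: OrderEvent) -> dict:
    # Request validation is handled by the Pydantic model; publish the event as JSON.
    # (event.json() is the Pydantic v1 serializer; on v2 use event.model_dump_json().)
    response = sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=event.json())
    return {"message_id": response["MessageId"]}
```

A downstream consumer (for example an Airflow-sensed task or a Lambda worker) would then poll the queue and trigger the appropriate DAG or processing step.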
Posted 2 weeks ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
- 1+ years of data engineering experience
- Experience writing and optimizing SQL queries with large-scale, complex datasets
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)

The IN Data Engineering & Analytics (IDEA) team is looking to hire a rock star Data Engineer to build and manage the largest petabyte-scale data infrastructure in India for Amazon India businesses. IDEA is the central data engineering and analytics team for all A.in businesses. The team's charter includes 1) providing Unified Data and Analytics Infrastructure (UDAI) for all A.in teams, which includes a central petabyte-scale Redshift data warehouse, analytics infrastructure, frameworks for visualizing and automating the generation of reports and insights, and self-service data applications for ingesting, storing, discovering, processing and querying of the data, and 2) providing business-specific data solutions for various business streams like Payments, Finance, Consumer and Delivery Experience.

The Data Engineer will play a key role as a strong owner of our Data Platform. He/she will own and build data pipelines, automations and solutions to ensure the availability, system efficiency, IMR efficiency, scaling, expansion, operations and compliance of the data platform that serves 200+ IN businesses. The role sits at the heart of the technology and business worlds and provides opportunities for growth, high business impact and working with seasoned business leaders. An ideal candidate will have a sound technical background in managing large data infrastructures, working with petabyte-scale data, building scalable data solutions/automations and driving operational excellence. An ideal candidate will be a self-starter who can start with a platform requirement and work backwards to conceive and devise the best possible solution, a good communicator while driving customer interactions, a passionate learner of new technology when the need arises, a strong owner of every deliverable in the team, obsessed with customer delight and business impact, and 'gets work done' in business time.

Key job responsibilities
1. Design/implement automation and manage our massive data infrastructure to scale for the analytics needs of Amazon IN.
2. Build solutions to achieve BAA (Best At Amazon) standards for system efficiency, IMR efficiency, data availability, consistency and compliance.
3. Enable efficient data exploration and experimentation on large datasets on our data platform, and implement data access control mechanisms for stand-alone datasets.
4. Design and implement scalable and cost-effective data infrastructure to enable non-IN (Emerging Marketplaces and WW) use cases on our data platform.
5. Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL, Amazon and AWS big data technologies (a minimal load sketch follows below).
6. Possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment.
7. Drive operational excellence strongly within the team and build automation and mechanisms to reduce operations.
8. Enjoy working closely with your peers in a group of very smart and talented engineers.

A day in the life
The India Data Engineering and Analytics (IDEA) team is the central data engineering team for Amazon India. Our vision is to simplify and accelerate data-driven decision making for Amazon India by providing cost-effective, easy and timely access to high-quality data. We achieve this by providing UDAI (Unified Data & Analytics Infrastructure for Amazon India), which serves as a central data platform and provides data engineering infrastructure, ready-to-use datasets and self-service reporting capabilities. Our core responsibilities towards the India marketplace include a) providing systems (infrastructure) and workflows that allow ingestion, storage, processing and querying of data, b) building ready-to-use datasets for easy and faster access to the data, c) automating standard business analysis, reporting and dash-boarding, and d) empowering business with self-service tools to manage data and generate insights.

Experience with big data technologies such as Hadoop, Hive, Spark, EMR
Experience with any ETL tool like Informatica, ODI, SSIS, BODI, DataStage, etc.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
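As a rough illustration of the Redshift-centric automation this platform role involves, here is a minimal sketch that loads a partition from S3 into Redshift via the Redshift Data API; the cluster, database, table, role, and bucket names are hypothetical placeholders.

```python
# Minimal sketch: kick off a COPY from S3 into Redshift using the Redshift Data API
# and poll until it reaches a terminal state (no persistent DB connection needed).
import time

import boto3

client = boto3.client("redshift-data")

COPY_SQL = """
COPY analytics.orders
FROM 's3://example-data-lake/orders/dt=2024-06-01/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS PARQUET;
"""

resp = client.execute_statement(
    ClusterIdentifier="in-analytics-cluster",   # hypothetical cluster
    Database="warehouse",
    DbUser="etl_user",
    Sql=COPY_SQL,
)

# Poll the statement until Redshift reports FINISHED, FAILED, or ABORTED.
while True:
    status = client.describe_statement(Id=resp["Id"])["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(5)
print(f"COPY finished with status: {status}")
```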
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Expectations
Work with cross-functional teams to gather business needs and build data models and analytical solutions.
Act as the main liaison for business teams to understand and solve their data needs.
Design, build, and manage ETL pipelines for smooth data flow.
Write complex SQL queries to extract, clean, and analyze data.
Use Python (Pandas, NumPy) for automating data workflows and performing EDA.
Work with AWS tools like S3, Redshift, Glue, and Lambda for data storage and processing.
Create dashboards and reports using tools like Tableau or Metabase.
Conduct deep-dive analyses to uncover business insights and growth opportunities.
Ensure data quality, accuracy, and security.
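To illustrate the Pandas-based EDA workflow expected here, below is a minimal sketch that profiles an orders extract; the S3 path and column names are hypothetical placeholders, and reading directly from S3 assumes the optional s3fs dependency is installed.

```python
# Minimal sketch: quick exploratory profiling of an orders extract with Pandas.
import pandas as pd

# Reading directly from S3 relies on the optional s3fs dependency.
df = pd.read_csv("s3://example-analytics-bucket/exports/orders.csv",
                 parse_dates=["order_date"])

# Basic data-quality checks: shape, missing values, duplicate keys.
print(df.shape)
print(df.isna().mean().sort_values(ascending=False).head(10))
print("duplicate order_ids:", df["order_id"].duplicated().sum())

# Simple deep-dive: monthly revenue and order counts by category.
monthly = (df.assign(month=df["order_date"].dt.to_period("M"))
             .groupby(["month", "category"])
             .agg(revenue=("amount", "sum"), orders=("order_id", "count"))
             .reset_index())
print(monthly.head())
```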
Posted 2 weeks ago
4.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Description: Senior Data Migration Engineer

About Oracle FSGIU - Finergy: The Finergy division within Oracle FSGIU is dedicated to the Banking, Financial Services, and Insurance (BFSI) sector. We offer deep industry knowledge and expertise to address the complex financial needs of our clients. With proven methodologies that accelerate deployment and personalization tools that create loyal customers, Finergy has established itself as a leading provider of end-to-end banking solutions. Our single platform for a wide range of banking services enhances operational efficiency, and our expert consulting services ensure technology aligns with our clients' business goals.

Job Summary: We are seeking a skilled Senior Data Migration Engineer with expertise in AWS, Databricks, Python, PySpark, and SQL to lead and execute complex data migration projects. The ideal candidate will design, develop, and implement data migration solutions to move large volumes of data from legacy systems to modern cloud-based platforms, ensuring data integrity, accuracy, and minimal downtime.

Job Responsibilities:
Software Development: Design, develop, test, and deploy high-performance, scalable data solutions using Python, PySpark, and SQL. Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications. Implement efficient and maintainable code using best practices and coding standards.
AWS & Databricks Implementation: Work with the Databricks platform for big data processing and analytics. Develop and maintain ETL processes using Databricks notebooks. Implement and optimize data pipelines for data transformation and integration. Utilize AWS services (e.g., S3, Glue, Redshift, Lambda) and Databricks to build and optimize data migration pipelines. Leverage PySpark for large-scale data processing and transformation tasks.
Continuous Learning: Stay updated on the latest industry trends, tools, and technologies related to Python, SQL, and Databricks. Share knowledge with the team and contribute to a culture of continuous improvement.
SQL Database Management: Utilize expertise in SQL to design, optimize, and maintain relational databases. Write complex SQL queries for data retrieval, manipulation, and analysis.

Qualifications & Skills:
Education: Bachelor's degree in Computer Science, Engineering, Data Science, or a related field. Advanced degrees are a plus.
4 to 8 years of experience in Databricks and big data frameworks.
Proficient in AWS services and data migration.
Experience with Unity Catalog.
Familiarity with batch and real-time processing.
Data engineering experience with strong skills in Python, PySpark, and SQL.
Certifications: AWS Certified Solutions Architect, Databricks Certified Professional, or similar are a plus.
Soft Skills: Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Ability to work in a fast-paced, agile environment.

Career Level: IC2

About Us: As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
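As an illustration of the PySpark-based migration work described above, here is a minimal, hedged sketch of moving a table from a legacy JDBC source into a Delta table on S3 from a Databricks notebook; the connection details, table names, and paths are hypothetical placeholders.

```python
# Minimal sketch (Databricks-notebook style): migrate a legacy table to Delta on S3.
from pyspark.sql import SparkSession
from pyspark.sql.functions import current_timestamp

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

# 1) Extract from the legacy system over JDBC (hypothetical Oracle source).
legacy_df = (spark.read.format("jdbc")
             .option("url", "jdbc:oracle:thin:@legacy-host:1521/ORCLPDB1")
             .option("dbtable", "CORE.ACCOUNTS")
             .option("user", "migration_user")
             .option("password", "***")           # use a secret scope in practice
             .load())

# 2) Light transformation plus an audit column for reconciliation.
staged_df = (legacy_df.dropDuplicates(["ACCOUNT_ID"])
                      .withColumn("migrated_at", current_timestamp()))

# 3) Load into a Delta table backed by S3, then validate row counts.
target_path = "s3://example-migration-bucket/delta/accounts"  # hypothetical path
staged_df.write.format("delta").mode("overwrite").save(target_path)
print("source rows:", legacy_df.count(),
      "| target rows:", spark.read.format("delta").load(target_path).count())
```

In a real migration the row-count check would typically be extended with checksum or column-level reconciliation before cutover.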
Posted 2 weeks ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
Remote
🔥 Senior Backend Engineer - Java (WFH/Remote)

This is a full-time remote working opportunity. If you are interested and fulfill the criteria below, then share the following information:
1. Email id
2. Years of relevant experience
3. Notice period
4. CCTC, ECTC

Must Haves:
Notice period of less than 1 month.
5+ years in web development in similar environments.
Bachelor's degree in computer science, information security, or a related technology field.
In-depth and demonstrable knowledge of Java 8 and 17, Spring, Spring Boot.
Experience with microservices and events.
Great experience with, and passion for, creating documentation for code and business processes.
Expert in architectural design and code review with a strong knowledge of SOLID principles.
Expert in gathering and navigating complex requirements and business processes.
Contribute to the development of our internal tools and reusable architecture.
Experience creating optimized code and performance improvements for production systems and applications.
Experience debugging, refactoring applications, and replicating scenarios to solve issues and understand the business.
Unit and system testing frameworks; familiarity with JUnit and Mockito.
Must have Git experience.
Dedicated: own the apps you and your team are developing and take quality very seriously.
Problem solving: proactively solve problems before they can become real problems, constantly upgrading your skill set and applying those practices.

Main Tasks:
Be part of the small team that's developing multi-cloud platform services.
Build and maintain automation frameworks to execute developer-written tests in private and public cloud environments.
Optimize the code, ensure best coding practices are followed, and support the existing team in their technical hurdles.
Monitor and support the service providers using the app in the field.

Nice to Haves:
Experience with Test Driven Development.
Experience with logistics software (delivery, transportation, route planning), RSA domain.
Experience with AWS services like ECS, SNS, SQS, and Redshift.
Posted 2 weeks ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description: Senior Data Migration Engineer

About Oracle FSGIU - Finergy: The Finergy division within Oracle FSGIU is dedicated to the Banking, Financial Services, and Insurance (BFSI) sector. We offer deep industry knowledge and expertise to address the complex financial needs of our clients. With proven methodologies that accelerate deployment and personalization tools that create loyal customers, Finergy has established itself as a leading provider of end-to-end banking solutions. Our single platform for a wide range of banking services enhances operational efficiency, and our expert consulting services ensure technology aligns with our clients' business goals.

Job Summary: We are seeking a skilled Senior Data Migration Engineer with expertise in AWS, Databricks, Python, PySpark, and SQL to lead and execute complex data migration projects. The ideal candidate will design, develop, and implement data migration solutions to move large volumes of data from legacy systems to modern cloud-based platforms, ensuring data integrity, accuracy, and minimal downtime.

Job Responsibilities:
Software Development: Design, develop, test, and deploy high-performance, scalable data solutions using Python, PySpark, and SQL. Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications. Implement efficient and maintainable code using best practices and coding standards.
AWS & Databricks Implementation: Work with the Databricks platform for big data processing and analytics. Develop and maintain ETL processes using Databricks notebooks. Implement and optimize data pipelines for data transformation and integration. Utilize AWS services (e.g., S3, Glue, Redshift, Lambda) and Databricks to build and optimize data migration pipelines. Leverage PySpark for large-scale data processing and transformation tasks.
Continuous Learning: Stay updated on the latest industry trends, tools, and technologies related to Python, SQL, and Databricks. Share knowledge with the team and contribute to a culture of continuous improvement.
SQL Database Management: Utilize expertise in SQL to design, optimize, and maintain relational databases. Write complex SQL queries for data retrieval, manipulation, and analysis.

Qualifications & Skills:
Education: Bachelor's degree in Computer Science, Engineering, Data Science, or a related field. Advanced degrees are a plus.
4 to 8 years of experience in Databricks and big data frameworks.
Proficient in AWS services and data migration.
Experience with Unity Catalog.
Familiarity with batch and real-time processing.
Data engineering experience with strong skills in Python, PySpark, and SQL.
Certifications: AWS Certified Solutions Architect, Databricks Certified Professional, or similar are a plus.
Soft Skills: Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Ability to work in a fast-paced, agile environment.

Career Level: IC2

About Us: As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 2 weeks ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description: Senior Data Migration Engineer

About Oracle FSGIU - Finergy: The Finergy division within Oracle FSGIU is dedicated to the Banking, Financial Services, and Insurance (BFSI) sector. We offer deep industry knowledge and expertise to address the complex financial needs of our clients. With proven methodologies that accelerate deployment and personalization tools that create loyal customers, Finergy has established itself as a leading provider of end-to-end banking solutions. Our single platform for a wide range of banking services enhances operational efficiency, and our expert consulting services ensure technology aligns with our clients' business goals.

Job Summary: We are seeking a skilled Senior Data Migration Engineer with expertise in AWS, Databricks, Python, PySpark, and SQL to lead and execute complex data migration projects. The ideal candidate will design, develop, and implement data migration solutions to move large volumes of data from legacy systems to modern cloud-based platforms, ensuring data integrity, accuracy, and minimal downtime.

Job Responsibilities:
Software Development: Design, develop, test, and deploy high-performance, scalable data solutions using Python, PySpark, and SQL. Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications. Implement efficient and maintainable code using best practices and coding standards.
AWS & Databricks Implementation: Work with the Databricks platform for big data processing and analytics. Develop and maintain ETL processes using Databricks notebooks. Implement and optimize data pipelines for data transformation and integration. Utilize AWS services (e.g., S3, Glue, Redshift, Lambda) and Databricks to build and optimize data migration pipelines. Leverage PySpark for large-scale data processing and transformation tasks.
Continuous Learning: Stay updated on the latest industry trends, tools, and technologies related to Python, SQL, and Databricks. Share knowledge with the team and contribute to a culture of continuous improvement.
SQL Database Management: Utilize expertise in SQL to design, optimize, and maintain relational databases. Write complex SQL queries for data retrieval, manipulation, and analysis.

Qualifications & Skills:
Education: Bachelor's degree in Computer Science, Engineering, Data Science, or a related field. Advanced degrees are a plus.
4 to 8 years of experience in Databricks and big data frameworks.
Proficient in AWS services and data migration.
Experience with Unity Catalog.
Familiarity with batch and real-time processing.
Data engineering experience with strong skills in Python, PySpark, and SQL.
Certifications: AWS Certified Solutions Architect, Databricks Certified Professional, or similar are a plus.
Soft Skills: Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Ability to work in a fast-paced, agile environment.

Career Level: IC2

About Us: As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
The Lead Data Engineer will provide technical expertise in the analysis, design, development, rollout and maintenance of data integration initiatives. This role will contribute to implementation methodologies and best practices, and will work on project teams to analyze, design, develop and deploy business intelligence / data integration solutions to support a variety of customer needs. The position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, trainings and initiatives through mentoring and coaching. It provides technical expertise in needs identification, data modelling, data movement and transformation mapping (source to target), automation and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective, whilst leveraging best-fit technologies (e.g., cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges. Works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports and business intelligence best practices. Responsible for repeatable, lean and maintainable enterprise BI design across organizations. Effectively partners with the client team. Leadership not only in the conventional sense, but also within a team: we expect people to be leaders. Candidates should exhibit leadership qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, and knowledge sharing.

Responsibilities:
Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc.
Create functional and technical documentation, e.g. ETL architecture documentation, unit testing plans and results, data integration specifications, data testing plans, etc.
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
May serve as project or DI lead, overseeing multiple consultants.
Stay current with emerging and changing technologies to best recommend and implement beneficial technologies and approaches for data integration.
Ensure proper execution/creation of methodology, training, templates, resource plans and engagement review processes.
Coach team members to ensure understanding on projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities when appropriate.
Coordinate and consult with the project manager, client business staff, client technical staff and project developers on data architecture best practices and anything else that is data-related at the project or business unit levels.
Architect, design, develop and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations and best-practice standards. Toolsets include but are not limited to SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau and Qlik.
Work with the report team to identify, design and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability requirements.

Must have:
Writing code in a programming language, with working experience in Python, PySpark, Databricks, Scala or similar.
Data Pipeline Development & Management: Design, develop, and maintain ETL (Extract, Transform, Load) pipelines using AWS services like AWS Glue, AWS Data Pipeline, Lambda, and Step Functions. Implement incremental data processing using tools like Apache Spark (EMR), Kinesis, and Kafka. Work with AWS data storage solutions such as Amazon S3, Redshift, RDS, DynamoDB, and Aurora. Optimize data partitioning, compression, and indexing for efficient querying and cost optimization. Implement data lake architecture using AWS Lake Formation and the Glue Catalog. Implement CI/CD pipelines for data workflows using CodePipeline, CodeBuild, and GitHub. (A minimal AWS Glue job sketch follows below.)

Nice to have:
Enterprise data modelling and semantic modelling, with working experience in ERwin, ER/Studio, PowerDesigner or similar.
Logical/physical modelling on big data sets or a modern data warehouse, with working experience in ERwin, ER/Studio, PowerDesigner or similar.
Agile process (Scrum cadences, roles, deliverables) and a basic understanding of either Azure DevOps, JIRA or similar.
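As an illustration of the Glue-based pipeline work listed in the must-haves, here is a minimal, hedged sketch of a Glue ETL job script; the catalog database, table, and output path are hypothetical placeholders.

```python
# Minimal sketch: an AWS Glue ETL job that reads a cataloged table, drops rows
# missing the business key, and writes partitioned Parquet back to S3.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source: a table registered in the Glue Data Catalog (hypothetical names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="orders"
)

# Transform: switch to a Spark DataFrame and drop rows without an order_id.
cleaned = orders.toDF().dropna(subset=["order_id"])

# Sink: partitioned Parquet in the curated zone of the data lake.
(cleaned.write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-zone/orders/"))

job.commit()
```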
Posted 2 weeks ago
3.0 years
0 Lacs
India
On-site
Job Title: BI Engineer - Amazon QuickSight Developer
Location: Ahmedabad - On-site

Job Summary
We are seeking an experienced Amazon QuickSight Developer to join our BI team. This role requires deep expertise in designing and deploying intuitive, high-impact dashboards and managing all aspects of QuickSight administration. You'll collaborate closely with data engineers and business stakeholders to create scalable BI solutions that empower data-driven decisions across the organization.

Key Responsibilities
Dashboard Development & Visualization: Design, develop, and maintain interactive QuickSight dashboards using advanced visuals, parameters, and controls. Create reusable datasets and calculated fields using both SPICE and Direct Query modes. Implement advanced analytics such as level-aware calculations, ranking, period-over-period comparisons, and custom KPIs. Build dynamic, user-driven dashboards with multi-select filters, dropdowns, and custom date ranges. Optimize performance and usability to maximize business value and user engagement.
QuickSight Administration: Manage users, groups, and permissions through QuickSight and AWS IAM roles. Implement and maintain row-level security (RLS) to ensure appropriate data access. Monitor usage, SPICE capacity, and subscription resources to maintain system performance. Configure and maintain themes, namespaces, and user interfaces for consistent experiences. Work with IT/cloud teams on account-level settings and AWS integrations.
Collaboration & Data Integration: Partner with data engineers and analysts to understand data structures and business needs. Integrate QuickSight with AWS services such as Redshift, Athena, S3, and Glue. Ensure data quality and accuracy through robust data modeling and SQL optimization.

Required Skills & Qualifications
3+ years of hands-on experience with Amazon QuickSight (development and administration).
Strong SQL skills and experience working with large, complex datasets.
Expert-level understanding of QuickSight security, RLS, SPICE management, and user/group administration.
Strong sense of data visualization best practices and UX design principles.
Proficiency with AWS data services including Redshift, Athena, S3, Glue, and IAM.
Solid understanding of data modeling and business reporting frameworks.

Nice to Have:
Experience with Python, AWS Lambda, or automating QuickSight administration via the SDK or CLI.
Familiarity with modern data stack tools (e.g., dbt, Snowflake, Tableau, Power BI).

Apply Now: If you're passionate about building scalable BI solutions and making data come alive through visualization, we'd love to hear from you!
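For illustration of the SDK-based QuickSight administration mentioned under nice-to-haves, here is a minimal, hedged boto3 sketch that audits users and checks SPICE ingestion status; the account ID, region, dataset ID, and namespace are hypothetical placeholders.

```python
# Minimal sketch: audit QuickSight users and check recent SPICE ingestions for a
# dataset using boto3 (read-only administration calls).
import boto3

ACCOUNT_ID = "123456789012"          # hypothetical AWS account
qs = boto3.client("quicksight", region_name="ap-south-1")

# List users in the default namespace along with their QuickSight roles.
users = qs.list_users(AwsAccountId=ACCOUNT_ID, Namespace="default")
for user in users["UserList"]:
    print(user["UserName"], user["Role"])

# Inspect recent SPICE ingestions for a dataset (hypothetical dataset ID).
ingestions = qs.list_ingestions(AwsAccountId=ACCOUNT_ID, DataSetId="sales-dataset-id")
for ing in ingestions["Ingestions"][:5]:
    print(ing["IngestionId"], ing["IngestionStatus"])
```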
Posted 2 weeks ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview
Annalect is currently seeking a Senior Data Engineer to join our Technology team. In this role you will build Annalect products which sit atop cloud-based data infrastructure. We are looking for people who have a shared passion for technology, design & development, data, and fusing these disciplines together to build cool things. In this role, you will work on one or more software and data products in the Annalect Engineering Team. You will participate in technical architecture, design and development of software products as well as research and evaluation of new technical solutions.

Responsibilities
Designing, building, testing, and deploying data transfers across various cloud environments (Azure, GCP, AWS, Snowflake, etc.).
Developing data pipelines, monitoring, maintaining, and tuning.
Writing at-scale data transformations in SQL and Python.
Performing code reviews and providing leadership and guidance to junior developers.

Qualifications
Curiosity in learning the business requirements that are driving the engineering requirements.
Interest in new technologies and eagerness to bring those technologies and out-of-the-box ideas to the team.
3+ years of SQL experience.
3+ years of professional Python experience.
3+ years of professional Linux experience.
Preferred familiarity with Snowflake, AWS, GCP, and Azure cloud environments.
Intellectual curiosity and drive; self-starters will thrive in this position.
Passion for technology: excitement for new technology, bleeding-edge applications, and a positive attitude towards solving real-world challenges.

Additional Skills
BS, MS or PhD in Computer Science, Engineering, or equivalent real-world experience.
Experience with big data and/or infrastructure. Bonus for having experience setting up petabytes of data so they can be easily accessed.
Understanding of data organization, i.e., partitioning, clustering, file sizes, file formats.
Experience working with classical relational databases (Postgres, MySQL, MSSQL).
Experience with Hadoop, Hive, Spark, Redshift, or other data processing tools (lots of time will be spent building and optimizing transformations).
Proven ability to independently execute projects from concept to implementation to launch and to maintain a live product.

Perks of working at Annalect
We have an incredibly fun, collaborative, and friendly environment, and often host social and learning activities such as game night, speaker series, and so much more!
Halloween is a special day on our calendar since it is our Founding Day - we go all out with decorations, costumes, and prizes!
Generous vacation policy. Paid time off (PTO) includes vacation days, personal days, and a Summer Friday program. Extended time off around the holiday season: our office is closed between Xmas and New Year to encourage our hardworking employees to rest, recharge and celebrate the season with family and friends.
As part of Omnicom, we have the backing and resources of a global billion-dollar company, but also the flexibility and pace of a "startup" - we move fast, break things, and innovate.
Work with a modern stack and environment to keep on learning and improving, helping to experiment with and shape the latest technologies.
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description

The Role: The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions
Delivering on critical business priorities while ensuring alignment with the wider architectural vision
Identifying and helping address potential risks in the data supply chain
Following and contributing to technical standards
Designing and developing analytical data models

Required Qualifications & Work Experience
First Class Degree in Engineering/Technology/MCA
5 to 8 years' experience implementing data-intensive solutions using agile methodologies
Experience of relational databases and using SQL for data querying, transformation and manipulation
Experience of modelling data for analytical consumers
Ability to automate and streamline the build, test and deployment of data pipelines
Experience in cloud-native technologies and patterns
A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
Excellent communication and problem-solving skills

Technical Skills (Must Have)
ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
Big Data: Experience of 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
Data Warehousing & Database Management: Understanding of data warehousing concepts, relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java or Scala
DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
Containerization: Fair understanding of containerization platforms like Docker, Kubernetes
File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, Delta
Others: Basics of job schedulers like Autosys. Basics of entitlement management
Certification in any of the above topics would be an advantage.
Job Family Group: Technology
Job Family: Digital Software Engineering

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 2 weeks ago
3.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Role: The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions
Delivering on critical business priorities while ensuring alignment with the wider architectural vision
Identifying and helping address potential risks in the data supply chain
Following and contributing to technical standards
Designing and developing analytical data models

Required Qualifications & Work Experience
First Class Degree in Engineering/Technology/MCA
3 to 4 years' experience implementing data-intensive solutions using agile methodologies
Experience of relational databases and using SQL for data querying, transformation and manipulation
Experience of modelling data for analytical consumers
Ability to automate and streamline the build, test and deployment of data pipelines
Experience in cloud-native technologies and patterns
A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
Excellent communication and problem-solving skills

Technical Skills (Must Have)
ETL: Hands-on experience of building data pipelines. Proficiency in at least one data integration platform such as Ab Initio, Apache Spark, Talend or Informatica
Big Data: Exposure to 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
Data Warehousing & Database Management: Understanding of data warehousing concepts, relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java or Scala
DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
Containerization: Fair understanding of containerization platforms like Docker, Kubernetes
File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, Delta
Others: Basics of job schedulers like Autosys. Basics of entitlement management
Certification in any of the above topics would be an advantage.
Job Family Group: Technology
Job Family: Digital Software Engineering

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 2 weeks ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
This role will be working out of our Indore office 5 days a week. Vena is looking for a Manager, SaaS Engineering to join our SaaS Technology and Operations (STO) team. This role is a match for you if you love leading and mentoring a team of Site Reliability Developers building highly scalable, resilient and automated services. Our team is dedicated to delivering outstanding customer experience through top-notch automation and orchestration practices for Vena's SaaS platform. As a technical manager, your role will involve utilizing your expertise in software and systems engineering to develop and manage large-scale, fault-tolerant systems and services. We seek individuals who are committed to making a difference and flourish in a flexible work environment focused on achieving business objectives. Your role is to ensure that our systems - both internally and externally facing - are designed to maximize resiliency and uptime. Our team focuses on optimizing existing systems, building infrastructure and reducing toil through automation. Practices such as limiting time spent on manual operational work, post-mortems and proactive identification of potential outages factor into iterative improvement that is key to both product quality and technical standards.
What You Will Do Working directly with the Sr. Director of SaaS Engineering, you will play a pivotal role in supporting and leading your team of Site Reliability Developers. Provide technical leadership and oversight of both planning and deployment functions, ensuring sustainability and predictability in the delivery of technical solutions implemented by your team. Mentor and nurture the Site Reliability Developers who report to you with a focus on their career growth. Stay in tune with the evolution of the relevant technology landscapes; constantly evaluate Vena’s architecture for opportunities to leverage new capabilities to modernize. Identify technical and/or process improvement opportunities based on best practices and industry standards and advocate to get them implemented within Vena's technology organization. All members of the STO organization are expected to demonstrate technical capabilities and at times directly participate in implementation or proof-of-concept work. Maintain issue tracking (Jira) and documentation (Confluence), providing reporting that ensures proper tracking and visibility of your team's current state. Resolve conflicts by demonstrating leadership and appropriate decision-making competencies. Other duties as assigned.
What We Use Please note this reflects only a portion of our current technical stack, and we are constantly evolving and revisiting our stack as we grow: a modern AWS cloud infrastructure managed through infrastructure-as-code (Terraform), configuration-as-code (Ansible), and CI/CD (Jenkins); RDS MySQL, Redshift, Redshift Spectrum, MongoDB, Elasticsearch, Kinesis, SQS, and RabbitMQ; DevOps tools written in Python; back-end applications written using Java, Dropwizard, Spring Boot, and Hibernate; front-end applications written using TypeScript, JavaScript, React (Context API and Hooks), and Redux; monitoring with Datadog and CloudWatch.
Does this sound like you? 5+ years of technical experience in an IT Operational, DevOps, Site Reliability Engineer, or Software Engineering role. 2-3+ years of experience in a leadership role (Team Lead or Manager). Strong cross-functional collaboration skills, relationship-building skills, and ability to achieve results without direct reporting relationships. You take personal responsibility and hold yourself accountable for providing exceptional work, both individually and as part of a team. You have a strong knowledge of cloud computing platforms (AWS and Azure) and experience in setting up and managing cloud infrastructure using various IaC and orchestration tools. You can write code - in any language. You have implemented your work in a production environment and can back it up with examples. Demonstrated experience executing on a technology roadmap directly supporting the larger software organization's platform. Strong sense of personal responsibility and accountability for delivering high-quality work, both personally and at a team level. Experience with the operational aspects of software systems using telemetry, centralized logging, and alerting with tools such as CloudWatch, Datadog, and Prometheus.
Our salaries are tailored to roles, levels and locations. Your individual pay within this range is influenced by factors like work location, skills, experience and education. As you progress in your role, your compensation may adapt, offering flexibility for growth beyond initial levels. For specifics, your recruiter will provide details and address any questions during the hiring process.
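The stack above calls out telemetry and alerting with CloudWatch, Datadog, and Prometheus. As a minimal, hedged sketch of the kind of automation this role describes, the snippet below publishes a custom latency metric to CloudWatch and attaches a threshold alarm using boto3; the namespace, metric, and alarm names are illustrative assumptions, not Vena's actual configuration.

```python
# Minimal sketch: publish a custom metric and attach a threshold alarm.
# Namespace, metric, and alarm names are illustrative assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def publish_latency(service: str, latency_ms: float) -> None:
    """Publish a single latency data point for a service."""
    cloudwatch.put_metric_data(
        Namespace="Example/SaaSPlatform",
        MetricData=[{
            "MetricName": "RequestLatency",
            "Dimensions": [{"Name": "Service", "Value": service}],
            "Unit": "Milliseconds",
            "Value": latency_ms,
        }],
    )

def ensure_latency_alarm(service: str, threshold_ms: float = 500.0) -> None:
    """Create (or update) an alarm that fires when average latency stays high."""
    cloudwatch.put_metric_alarm(
        AlarmName=f"{service}-high-latency",
        Namespace="Example/SaaSPlatform",
        MetricName="RequestLatency",
        Dimensions=[{"Name": "Service", "Value": service}],
        Statistic="Average",
        Period=300,              # evaluate over 5-minute windows
        EvaluationPeriods=3,     # require three consecutive breaches
        Threshold=threshold_ms,
        ComparisonOperator="GreaterThanThreshold",
        TreatMissingData="notBreaching",
    )

if __name__ == "__main__":
    publish_latency("billing-api", 412.0)
    ensure_latency_alarm("billing-api")
```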
Posted 2 weeks ago
4.0 - 5.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Scripbox is India’s largest and best-established digital wealth management service that helps its customers create wealth for their long- and short-term goals. Founded in 2012, Scripbox is a pioneer in the digital financial services category and is recognised for creating simple and elegant user experiences in a complex domain. We do this by simplifying complex investing concepts and automating best practices, so our customers can grow their wealth without worry. We achieve this by combining cutting-edge technology, data-driven algorithms, awesome UX and friendly customer support. Our task is ambitious and we like to work hard as well as smart. We want to build a team that relishes challenges and contributes to a new way of thinking and investing in India. We are also invested in the growth of our colleagues and providing a supportive and thriving working environment for everyone. We have been recognised by Great Place To Work® as one of India’s best companies to work for. We are looking for creators who can build products that our customers love. The challenge for you will involve understanding, and building for, an unforgiving consumer who places a lot of trust in the product YOU will build. Your product will be used by thousands. Scripbox is making a difference in the world of personal finance and investing and we would like you to be part of the team that makes it happen.
Responsibilities: Develop high-quality code using established language best practices. Collaborate closely within a team environment. Utilize the latest tools and techniques to build robust software. Actively participate in design reviews, code development, code reviews, and unit testing. Take ownership of the quality and usability of your code contributions.
Requirements: 4-5 years of experience building good quality production software. Excellent knowledge of at least one ecosystem based on Ruby, Elixir, Java or Python. Proficiency in object-oriented programming, including a solid understanding of design patterns. Experience with functional programming is preferred but not required. Familiarity with datastores like MySQL, PostgreSQL, Redis, Redshift, etc. Familiarity with React.js/React Native, Vue.js, Bootstrap, etc. Knowledge of deploying software to AWS, GCP, Azure. Knowledge of software best practices, like Test-Driven Development (TDD) and Continuous Integration (CI).
We Value: Entrepreneurial spirit. Everywhere you go, you can’t help but mobilize people, build things, solve problems, roll up your sleeves, go above and beyond, raise the bar. You are an insatiable doer and driver. Strong execution and organization. Your team will be working with engineers and product leads at the bleeding edge of the development cycle. To be successful in this role, you should be comfortable executing with little oversight and be able to adapt to problems quickly. Strategic mindset - you’re comfortable thinking a few steps ahead of where the team is at now.
What You’ll Get: Very competitive salary with performance bonus. Active promotion of your professional career by sending you to events, hackathons, user groups, etc. A weekly time slot where you are encouraged to play around with new technology or pursue self-learning.
Skills: Ruby on Rails, Elixir, Backend, Ruby
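The requirements above name Test-Driven Development and ecosystems such as Ruby, Elixir, Java, or Python. As a small, hedged sketch of the TDD habit in Python (the function and figures are hypothetical, not Scripbox code), the tests below pin down the expected behaviour and the implementation makes them pass; run the file with pytest.

```python
# Hypothetical TDD sketch: the tests define behaviour first, the function
# is then implemented to satisfy them. All names and numbers are illustrative.
import pytest

def allocate_sip(amount: float, weights: dict[str, float]) -> dict[str, float]:
    """Split a monthly investment across funds according to target weights."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    total = sum(weights.values())
    return {fund: round(amount * w / total, 2) for fund, w in weights.items()}

def test_allocation_respects_weights():
    split = allocate_sip(10_000, {"equity": 0.7, "debt": 0.3})
    assert split == {"equity": 7000.0, "debt": 3000.0}

def test_rejects_non_positive_amount():
    with pytest.raises(ValueError):
        allocate_sip(0, {"equity": 1.0})
```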
Posted 2 weeks ago
8.0 years
0 Lacs
India
On-site
About Us Udacity is on a mission of forging futures in tech through radical talent transformation in digital technologies. We offer a unique and immersive online learning platform, powering corporate technical training in fields such as Artificial Intelligence, Machine Learning, Data Science, Autonomous Systems, Cloud Computing and more. Our rapidly growing global organization is revolutionizing how the enterprise market bridges the talent shortage and skills gaps during its digital transformation journey. At Udacity, the Analytics Team is deploying data to inform and empower the company with insight, to drive student success and business value. We are looking for a Principal Data Analyst to help advance that vision as part of our business analytics group. You will work with stakeholders to help inform their current initiatives and long-term roadmap with data. You will be a key part of a dynamic data team that works daily with strategic partners to deliver data, prioritize resources and scale our impact. This is a chance to affect thousands of students around the world who come to Udacity to improve their lives, and your success as part of a world-class analytics organization will be visible up to the highest levels of the company.
Your Responsibilities You will report to the Director of Data and lead high-impact analyses of Udacity’s curriculum and learner behavior to optimize content strategy, ensure skills alignment with industry needs, and drive measurable outcomes for learners and enterprise clients. Lead the development of a strategic analytics roadmap for Udacity’s content organization, aligning insights with learning, product, and business goals. Partner with senior stakeholders to define and monitor KPIs that measure the health, efficacy, and ROI of our curriculum across both B2C and enterprise portfolios. Centralize and synthesize learner feedback, CX signals, and performance data to identify content pain points and inform roadmap prioritization. Develop scalable methods to assess content effectiveness by integrating learner outcomes, usage behavior, and engagement metrics. Contribute to building AI-powered systems that classify learner feedback, learning styles, and success predictors. Act as a thought partner to leaders across Content and Product by communicating insights clearly and influencing strategic decisions. Lead cross-functional analytics initiatives and mentor peers and junior analysts to elevate data maturity across the organization.
Requirements 8+ years of experience in analytics or data science roles with a focus on product/content insights, ideally in edtech or SaaS. Advanced SQL and experience with data warehouses (Athena, Presto, Redshift, etc.). Strong proficiency in Python for data analysis, machine learning, and automation. Experience with dashboards and visualization tools (e.g., Tableau, PowerBI, or similar). Strong knowledge of experimentation, A/B testing, and causal inference frameworks.
Proven ability to lead high-impact analytics projects independently and influence stakeholders. Excellent communication skills—able to translate technical insights into business recommendations.
Preferred Experience Familiarity with Tableau, Amplitude, dbt, Airflow, or similar tools. Experience working with large-scale sequential or clickstream data. Exposure to NLP, embeddings, or GPT-based analysis for feedback classification. Understanding of learning science or instructional design principles.
Benefits Experience a rewarding work environment with Udacity's perks and benefits! At Udacity, we offer you the flexibility of working from home. We also have in-person collaboration spaces in Mountain View, Cairo, Dubai and Noida and continue to build opportunities for team members to connect in person. Flexible working hours. Paid time off. Comprehensive medical insurance coverage for you and your dependents. Employee wellness resources and initiatives (access to wellness platforms like Headspace). Quarterly wellness day off. Personalized career development. Unlimited access to Udacity Nanodegrees.
What We Do Forging futures in tech is our vision. Udacity is where lifelong learners come to learn the skills they need, to land the jobs they want, and to build the lives they deserve.
Don’t stop there! Please keep reading... You’ve probably heard the following statistic: most male applicants apply when they meet only 60% of the qualifications, while women and other marginalized candidates apply only if they meet 100% of the qualifications. If you think you have what it takes but don’t meet every single point in the job description, please apply! We believe that historically, many processes have disproportionately hurt the most marginalized communities in society, including people of color, people from working-class backgrounds, women, and LGBTQ people. Centering these communities at our core is pivotal for any successful organization and a value we uphold steadfastly. Therefore, Udacity strongly encourages applications from all communities and backgrounds. Udacity is proud to be an Equal Employment Opportunity employer. Please read our blog post for “6 Reasons Why Diversity, Equity, and Inclusion in the Workplace Exists”.
Last, but certainly not least… Udacity is committed to creating economic empowerment and a more diverse and equitable world. We believe that the unique contributions of all Udacians are the driver of our success. To ensure that our products and culture continue to incorporate everyone’s perspectives and experience, we never discriminate on the basis of race, color, religion, sex, gender, gender identity or expression, sexual orientation, marital status, national origin, ancestry, disability, medical condition (including genetic information), age, veteran status or military status, or any other basis protected by federal, state or local laws. As part of our ongoing work to build more diverse teams at Udacity, when applying, you will be asked to complete a voluntary self-identification survey. This survey is anonymous; we are unable to connect your application with your survey responses. Please complete this voluntary survey as we utilize the data for diversity measures in terms of gender and ethnic background in both our candidates and our Udacians. We take this data seriously and appreciate your willingness to complete this step in the process, if you choose to do so.
Udacity's Values Obsess over Outcomes - Take the Lead - Embrace Curiosity - Celebrate the Assist
Udacity's Terms of Use and Privacy Policy
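The requirements above include experimentation and A/B testing. As a hedged illustration in Python (the conversion counts are invented, not Udacity data), the snippet below runs a two-proportion z-test on the completion rates of two course variants:

```python
# Hedged illustration of the A/B-testing skill the posting asks for:
# a two-proportion z-test on made-up completion counts.
from math import sqrt, erf

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF via erf
    return z, p_value

if __name__ == "__main__":
    # Variant B appears to lift course-completion rate; check significance.
    z, p = two_proportion_ztest(conv_a=480, n_a=4000, conv_b=552, n_b=4000)
    print(f"z = {z:.2f}, p = {p:.4f}")
```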
Posted 2 weeks ago
3.0 years
0 Lacs
Delhi, India
On-site
Experience: More than 3 years in data integration, pipeline development, and data warehousing, with a strong focus on AWS Databricks.
Technical Skills: Proficiency in the Databricks platform, its management, and optimization. Strong experience in AWS Cloud, particularly in data engineering and administration, with expertise in Apache Spark, S3, Athena, Glue, Kafka, Lambda, Redshift, and RDS. Proven experience in data engineering performance tuning and analytical understanding in business and program contexts. Solid experience in Python development, specifically in PySpark within the AWS Cloud environment, including experience with Terraform. Knowledge of databases (Oracle, SQL Server, PostgreSQL, Redshift, MySQL, or similar) and advanced database querying. Experience with source control systems (Git, Bitbucket) and Jenkins for build and continuous integration. Understanding of continuous deployment (CI/CD) processes. Experience with Airflow and additional Apache Spark knowledge is advantageous. Exposure to ETL tools, including Informatica.
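To ground the PySpark-on-AWS requirement above, here is a minimal, hedged sketch of a batch job that reads raw CSV data from S3, aggregates it, and writes partitioned Parquet back to S3; the bucket names, paths, and columns are hypothetical:

```python
# Minimal PySpark sketch in the spirit of the stack above (Spark on AWS, S3).
# Bucket names, paths, and schema are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-aggregate").getOrCreate()

orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-raw-bucket/orders/2024-06-01/")
)

daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/daily_revenue/")
)
```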
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Build the future of the AI Data Cloud. Join the Snowflake team. We are seeking a talented and motivated Analytics Engineer to join our team in Pune, India. This role will be pivotal in building and maintaining the data infrastructure that powers our cutting-edge AI applications, enabling us to deliver intelligent solutions to our customers and internal stakeholders. If you are passionate about data, AI, and working with a world-class cloud data platform, we want to hear from you.
THE ROLE As an Analytics Engineer focused on AI applications, you will be responsible for designing, developing, and maintaining robust and scalable data pipelines that feed our machine learning models and AI-driven features. You will collaborate closely with data scientists, AI researchers, software engineers, and product managers to understand data requirements and deliver high-quality data solutions. Your work will directly impact the performance and reliability of our AI systems, contributing to Snowflake's innovation in the AI space.
Job Description As an Analytics Engineer supporting AI Applications, you will: Data Pipeline Development & Maintenance: Design, build, and maintain scalable, reliable ETL/ELT pipelines in Snowflake to support AI model training, evaluation, and deployment. Integrate data from various sources, including internal systems, Salesforce, and other external vendor platforms. Develop a willingness to learn B2B concepts and the intricacies of diverse data sources. Implement data quality frameworks and ensure data integrity for AI applications. System Integration & Automation: Develop and automate data processes using SQL, Python, and other relevant technologies. Work with modern data stack tools and cloud-based data platforms, with a strong emphasis on Snowflake. MLOps Understanding & Support: Gain an understanding of MLOps principles and contribute to the operationalization of machine learning models. Support data versioning, model monitoring, and feedback loops for AI systems. Release Management & Collaboration: Participate actively in frequent release and testing cycles to ensure the high-quality delivery of data features and reduce risks in production AI systems. Develop and execute QA/test strategies for data pipelines and integrations, often coordinating with cross-functional teams. Gain experience with access control systems, CI/CD pipelines, and release testing methodologies to ensure secure and efficient deployments. Performance Optimization & Scalability: Monitor and optimize the performance of data pipelines and queries. Ensure data solutions are scalable to handle growing data volumes and evolving AI application needs.
What You Will Need Required Skills: Bachelor's or Master's degree in Computer Science, Engineering, or a related STEM (Science, Technology, Engineering, Mathematics) field. Strong proficiency in SQL for data manipulation, querying, and optimization. Proficiency in Python for data processing, automation, and scripting. Hands-on experience with Snowflake or other cloud-based data platforms (e.g., AWS Redshift, Google BigQuery, Azure Synapse). A proactive and collaborative mindset with a strong desire to learn new technologies and B2B concepts. Preferred Skills: Experience in building and maintaining ETL/ELT pipelines for AI/ML use cases. Understanding of MLOps principles and tools. Experience with data quality frameworks and tools. Familiarity with data modeling techniques. Experience with workflow orchestration tools (e.g., Airflow, Dagster).
Knowledge of software engineering best practices, including version control (e.g., Git), CI/CD, and testing. Experience coordinating QA/test strategies for cross-team integration. Familiarity with access control systems (e.g., Okta) and release testing. Excellent problem-solving and analytical skills. Strong communication and interpersonal skills. Snowflake is growing fast, and we’re scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact? For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com
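The role above centres on ELT pipelines in Snowflake built with SQL and Python. As a hedged sketch (connection details, table names, and the deduplication rule are placeholders, not Snowflake's internal setup), the snippet below uses the Snowflake Python connector to merge newly staged feedback events into a curated table:

```python
# Hedged ELT sketch: merge raw events already staged in Snowflake into a
# curated table. Connection details and table names are placeholders.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="AI_APPS",
)

MERGE_SQL = """
MERGE INTO curated_feedback AS tgt
USING (
    SELECT event_id, account_id, feedback_text, event_ts
    FROM raw_feedback_events
    QUALIFY ROW_NUMBER() OVER (PARTITION BY event_id ORDER BY event_ts DESC) = 1
) AS src
ON tgt.event_id = src.event_id
WHEN MATCHED THEN UPDATE SET
    tgt.feedback_text = src.feedback_text,
    tgt.event_ts = src.event_ts
WHEN NOT MATCHED THEN INSERT (event_id, account_id, feedback_text, event_ts)
VALUES (src.event_id, src.account_id, src.feedback_text, src.event_ts)
"""

cur = conn.cursor()
try:
    cur.execute(MERGE_SQL)          # deduplicate on event_id, keep latest row
    print("rows merged:", cur.rowcount)
finally:
    cur.close()
    conn.close()
```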
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary We are seeking a skilled Developer with 5 to 8 years of experience to join our team. The ideal candidate will have expertise in Amazon S3, Amazon Redshift, Python, Databricks SQL, Databricks Delta Lake, Databricks Workflows, and PySpark. Experience in Property & Casualty Insurance is a plus. This is a hybrid role with day shifts and no travel required.
Responsibilities Develop and maintain data pipelines using Amazon S3 and Amazon Redshift to ensure efficient data storage and retrieval. Utilize Python to write clean, scalable code for data processing and analysis tasks. Implement Databricks SQL for querying and analyzing large datasets to support business decisions. Manage and optimize Databricks Delta Lake for reliable and high-performance data storage. Design and execute Databricks Workflows to automate data processing tasks and improve operational efficiency. Leverage PySpark to perform distributed data processing and enhance data transformation capabilities. Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs. Ensure data quality and integrity by implementing robust validation and monitoring processes. Provide technical support and troubleshooting for data-related issues to maintain smooth operations. Stay updated with the latest industry trends and technologies to continuously improve data solutions. Contribute to the development of best practices and standards for data engineering within the team. Document technical specifications and processes to ensure knowledge sharing and continuity. Participate in code reviews and provide constructive feedback to peers for continuous improvement.
Qualifications Possess strong expertise in Amazon S3 and Amazon Redshift for data storage and management. Demonstrate proficiency in Python for developing scalable data processing solutions. Have hands-on experience with Databricks SQL for data querying and analysis. Show capability in managing Databricks Delta Lake for high-performance data storage. Exhibit skills in designing Databricks Workflows for automating data processes. Utilize PySpark for distributed data processing and transformation tasks. Experience in the Property & Casualty Insurance domain is a plus. Strong problem-solving skills and ability to troubleshoot data-related issues. Excellent communication and collaboration skills to work effectively with cross-functional teams. Ability to stay updated with the latest industry trends and technologies. Strong documentation skills for maintaining technical specifications and processes. Experience in participating in code reviews and providing constructive feedback. Commitment to maintaining data quality and integrity through robust validation processes.
Certifications Required: AWS Certified Solutions Architect, Databricks Certified Data Engineer Associate, Python Certification
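Since the role above combines PySpark with Databricks Delta Lake, here is a hedged sketch of a Delta upsert of the kind such pipelines typically perform; the lake paths, policy columns, and maintenance settings are illustrative assumptions, not the client's actual design:

```python
# Hedged sketch of a Delta Lake upsert (merge) followed by table maintenance.
# Paths and column names are illustrative.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("policy-upsert").getOrCreate()

updates = spark.read.parquet("s3://example-landing/policy_updates/")

target = DeltaTable.forPath(spark, "s3://example-lake/silver/policies")

(
    target.alias("t")
    .merge(updates.alias("u"), "t.policy_id = u.policy_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Compact small files (supported on Databricks / recent Delta releases)
# and clean up snapshots older than seven days.
spark.sql("OPTIMIZE delta.`s3://example-lake/silver/policies`")
target.vacuum(retentionHours=168)
```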
Posted 2 weeks ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description Are you excited about the digital media revolution and passionate about designing and delivering advanced analytics that directly influence the product decisions of Amazon's digital businesses? Do you see yourself as a champion of innovating on behalf of the customer by turning data insights into action? The Amazon Digital Acceleration (DA) Analytics team is looking for an analytical and technically skilled individual to join our team. In this role, you will play a critical part in developing foundational data instrumentation components to seamlessly surface relevant digital content to Amazon customers. An ideal individual is someone who has deep data engineering skills around ETL, data modeling, database architecture and big data solutions. You should have strong business judgement and excellent written and verbal communication skills.
Basic Qualifications 3+ years of data engineering experience. Experience with data modeling, warehousing and building ETL pipelines. Experience with SQL.
Preferred Qualifications Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions. Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases).
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI MAA 15 SEZ Job ID: A2844445
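The preferred qualifications above mention Redshift, S3, Glue, and IAM roles. As a hedged sketch of a common ETL step in that stack (cluster name, table, bucket, and IAM role ARN are placeholders), the snippet below issues a COPY from S3 into Redshift through the Redshift Data API and polls for completion:

```python
# Hedged sketch: load a partition of S3 data into Redshift with a COPY
# issued through the Redshift Data API. All identifiers are placeholders.
import time
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

COPY_SQL = """
COPY analytics.page_events
FROM 's3://example-events-bucket/dt=2024-06-01/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy-role'
FORMAT AS PARQUET;
"""

resp = client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql=COPY_SQL,
)

# Poll until the statement finishes; a real pipeline would usually hand this
# off to an orchestrator (Step Functions, Airflow) instead of sleeping.
while True:
    status = client.describe_statement(Id=resp["Id"])["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        print("COPY status:", status)
        break
    time.sleep(5)
```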
Posted 2 weeks ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First. Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers. With deep heritage and expertise in retail – one of the world’s most competitive markets, with a deluge of multi-dimensional data – dunnhumby today enables businesses all over the world, across industries, to be Customer First. dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas working for transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble and Metro.
We are seeking a talented Engineering Manager with ML Ops expertise to lead a team of engineers in developing products that help Retailers transform their Retail Media business in a way that helps them achieve maximum ad revenue and enable massive scale. As an Engineering Manager, you will play a pivotal role in designing and delivering high-quality software solutions. You will be responsible for leading a team, mentoring engineers, contributing to system architecture, and ensuring adherence to engineering best practices. Your technical expertise, leadership skills, and ability to drive results will be key to the success of our products.
What you will be doing? You will lead the charge in ensuring operational efficiency and delivering high-value solutions. You’ll mentor and develop a high-performing team of Big Data and MLOps engineers, driving best practices in software development, data management, and model deployment. With a focus on robust technical design, you’ll ensure solutions are secure, scalable, and efficient. Your role will involve hands-on development to tackle complex challenges, collaborating across teams to define requirements, and delivering innovative solutions. You’ll keep stakeholders and senior management informed on progress, risks, and opportunities while staying ahead of advancements in AI/ML technologies and driving their application. With an agile mindset, you will overcome challenges and deliver impactful solutions that make a difference.
Technical Expertise Proven experience in microservices architecture, with hands-on knowledge of Docker and Kubernetes for orchestration. Proficiency in ML Ops and Machine Learning workflows using tools like Spark. Strong command of SQL and PySpark programming. Expertise in Big Data solutions such as Spark and Hive, with advanced Spark optimizations and tuning skills. Hands-on experience with Big Data orchestrators like Airflow. Proficiency in Python programming, particularly with frameworks like FastAPI or equivalent API development tools. Experience in unit testing, code quality assurance, and the use of Git or other version control systems.
Cloud And Infrastructure Practical knowledge of cloud-based data stores, such as Redshift and BigQuery (preferred). Experience in cloud solution architecture, especially with GCP and Azure. Familiarity with GitLab CI/CD pipelines is a bonus.
Monitoring And Scalability Solid understanding of logging, monitoring, and alerting systems for production-level big data pipelines. Prior experience with scalable architectures and distributed processing frameworks.
Soft Skills And Additional Plus Points A collaborative approach to working within cross-functional teams. Ability to troubleshoot complex systems and provide innovative solutions.
Familiarity with GitLab for CI/CD and infrastructure automation tools is an added advantage.
What We Can Expect From Us We won’t just meet your expectations. We’ll defy them. So you’ll enjoy the comprehensive rewards package you’d expect from a leading technology company. But also, a degree of personal flexibility you might not expect. Plus, thoughtful perks, like flexible working hours and your birthday off. You’ll also benefit from an investment in cutting-edge technology that reflects our global ambition. But with a nimble, small-business feel that gives you the freedom to play, experiment and learn. And we don’t just talk about diversity and inclusion. We live it every day – with thriving networks including dh Gender Equality Network, dh Proud, dh Family, dh One and dh Thrive as the living proof. We want everyone to have the opportunity to shine and perform at their best throughout our recruitment process. Please let us know how we can make this process work best for you. For an informal and confidential chat please contact stephanie.winson@dunnhumby.com to discuss how we can meet your needs.
Our approach to Flexible Working At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work. We believe that you will do your best at work if you have a work / life balance. Some roles lend themselves to flexible options more than others, so if this is important to you please raise this with your recruiter, as we are open to discussing agile working opportunities during the hiring process. For further information about how we collect and use your personal information please see our Privacy Notice which can be found (here)
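Given the stack described above (Python with FastAPI, containerised services, ML Ops), here is a hedged sketch of a minimal model-scoring endpoint; the request fields and the stand-in scoring function are invented for illustration and are not dunnhumby's service:

```python
# Hedged sketch of a model-serving endpoint of the kind the posting describes
# (FastAPI behind a containerised deployment). Model and fields are stand-ins.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="ad-relevance-scorer")

class AdRequest(BaseModel):
    shopper_segment: str
    product_category: str
    past_ctr: float

def stand_in_model_score(req: AdRequest) -> float:
    # Placeholder for a real model loaded at startup (e.g. from a registry).
    return min(0.05 + 0.4 * req.past_ctr, 1.0)

@app.post("/score")
def score(req: AdRequest) -> dict:
    """Return a relevance score for one ad impression request."""
    return {"score": stand_in_model_score(req)}

# Run locally with: uvicorn scorer:app --reload
```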
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Role The Data Engineer is accountable for developing high quality data products to support the Bank’s regulatory requirements and data driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team.
Responsibilities Developing and supporting scalable, extensible, and highly available data solutions. Deliver on critical business priorities while ensuring alignment with the wider architectural vision. Identify and help address potential risks in the data supply chain. Follow and contribute to technical standards. Design and develop analytical data models.
Required Qualifications & Work Experience First Class Degree in Engineering/Technology/MCA. 5 to 8 years’ experience implementing data-intensive solutions using agile methodologies. Experience of relational databases and using SQL for data querying, transformation and manipulation. Experience of modelling data for analytical consumers. Ability to automate and streamline the build, test and deployment of data pipelines. Experience in cloud native technologies and patterns. A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training. Excellent communication and problem-solving skills.
Technical Skills (Must Have) ETL: Hands on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica. Big Data: Experience of ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing. Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design. Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures. Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala. DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management.
Technical Skills (Valuable) Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows. Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs. Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls. Containerization: Fair understanding of containerization platforms like Docker, Kubernetes. File Formats: Exposure to working with Event/File/Table formats such as Avro, Parquet, Protobuf, Iceberg, Delta. Others: Basics of job schedulers like Autosys. Basics of entitlement management. Certification on any of the above topics would be an advantage.
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: ------------------------------------------------------ Citi is an equal opportunity and affirmative action employer. Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Citigroup Inc. and its subsidiaries ("Citi") invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View the "EEO is the Law" poster. View the EEO is the Law Supplement. View the EEO Policy Statement. View the Pay Transparency Posting.
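The posting above lists Data Quality & Controls (validation, cleansing, enrichment) among its valuable skills. As a hedged sketch of that kind of control in PySpark (the dataset, columns, and thresholds are illustrative, not Citi's), the snippet below runs simple completeness and uniqueness checks and fails the pipeline when they are breached:

```python
# Hedged sketch of pipeline data-quality controls: completeness and
# uniqueness checks on a PySpark DataFrame. Names and thresholds are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("trade-dq-checks").getOrCreate()
trades = spark.read.parquet("s3://example-risk-lake/trades/2024-06-01/")

total = trades.count()
null_notional = trades.filter(F.col("notional").isNull()).count()
duplicate_keys = total - trades.dropDuplicates(["trade_id"]).count()

checks = {
    "null_notional_pct": null_notional / total if total else 0.0,
    "duplicate_trade_ids": duplicate_keys,
}

failures = []
if checks["null_notional_pct"] > 0.01:
    failures.append("more than 1% of trades are missing notional")
if checks["duplicate_trade_ids"] > 0:
    failures.append("duplicate trade_id values found")

if failures:
    raise ValueError("data quality checks failed: " + "; ".join(failures))
print("data quality checks passed", checks)
```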
Posted 2 weeks ago
3.0 - 4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Role The Data Engineer is accountable for developing high quality data products to support the Bank’s regulatory requirements and data driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team.
Responsibilities Developing and supporting scalable, extensible, and highly available data solutions. Deliver on critical business priorities while ensuring alignment with the wider architectural vision. Identify and help address potential risks in the data supply chain. Follow and contribute to technical standards. Design and develop analytical data models.
Required Qualifications & Work Experience First Class Degree in Engineering/Technology/MCA. 3 to 4 years’ experience implementing data-intensive solutions using agile methodologies. Experience of relational databases and using SQL for data querying, transformation and manipulation. Experience of modelling data for analytical consumers. Ability to automate and streamline the build, test and deployment of data pipelines. Experience in cloud native technologies and patterns. A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training. Excellent communication and problem-solving skills.
Technical Skills (Must Have) ETL: Hands on experience of building data pipelines. Proficiency in at least one of the data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica. Big Data: Exposure to ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing. Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design. Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures. Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala. DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management.
Technical Skills (Valuable) Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows. Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs. Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls. Containerization: Fair understanding of containerization platforms like Docker, Kubernetes. File Formats: Exposure to working with Event/File/Table formats such as Avro, Parquet, Protobuf, Iceberg, Delta. Others: Basics of job schedulers like Autosys. Basics of entitlement management. Certification on any of the above topics would be an advantage.
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Citi is an equal opportunity and affirmative action employer. Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Citigroup Inc. and its subsidiaries ("Citi") invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View the "EEO is the Law" poster. View the EEO is the Law Supplement. View the EEO Policy Statement. View the Pay Transparency Posting.
Posted 2 weeks ago
The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in Redshift can find a plethora of opportunities in various industries across the country.
The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.
In the field of Redshift, a typical career path may include roles such as:
- Junior Developer
- Data Engineer
- Senior Data Engineer
- Tech Lead
- Data Architect
Apart from expertise in Redshift, proficiency in the following skills can be beneficial:
- SQL
- ETL Tools
- Data Modeling
- Cloud Computing (AWS)
- Python/R Programming
As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!