2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About CASHe
CASHe is one of India's leading AI-driven, credit-led financial wellness platforms, offering a full spectrum of consumer finance, insurance and investment products designed to help customers avail sachetised financial services and products at scale. Since 2016, CASHe has been empowering the underserved sections of society by providing them easy and affordable access to credit that meets their unmet financial needs. It offers hassle-free loans to help people take control of their personal finances regardless of their credit score by utilizing its proprietary AI/ML-backed credit decisioning model called the Social Loan Quotient (SLQ). Headquartered in Mumbai, CASHe has disbursed loans worth over Rs 12,000 crore to over 5 lakh customers. Starting as a personal loan app that has been downloaded over 22 million times, CASHe is now a full-fledged financial services platform that offers its customers bespoke credit, investment and insurance products. CASHe, through its in-house wealth-tech arm, has also expanded significantly in the wealth management space and is looking to make major strides in the HNI investments space via an array of customized and exclusive offerings.

Job Description
Required Skills and Experience:
- Bachelor's or graduate degree in computer science or related engineering fields.
- 2-4 years of software development experience with progressive responsibilities.
- Strong development experience in Java and a solid understanding of object-oriented programming, with exposure to architecture/design patterns and high-level/detailed technical design.
- 2+ years of experience in Core Java, J2EE, Spring MVC, Spring Boot, JPA, Hibernate, web services, Maven, and Linux.
- Strong knowledge of responsive web design and technologies: Angular, React, JavaScript.
- Excellent experience with cloud computing environments like AWS.
- Previous experience working with AWS services such as Redshift, S3, EC2, etc.
- Ability to achieve high availability and a superior customer experience by implementing monitoring instrumentation and telemetry.
- Experience with modern engineering practices such as Agile/Scrum, DevOps, CI/CD, TDD and code versioning tools such as Git.
- Ownership mindset, managing end-to-end from requirements, design, development, testing, and deployment to maintenance of production systems.
- Team player with excellent communication skills, both oral and written.

Preferred, But Not Mandatory
- FinTech experience.
- Developing SOA/microservices and using them in building complex, scalable SaaS products.
- Experience with AI and machine learning APIs.
- Experience with GDPR, SOX and other regulatory/compliance requirements.
- Experience with cloud infrastructure components such as IP networking and containers (Docker, Kubernetes, etc.).
- Experience managing development of B2B and B2C mobile apps (Android, iOS). (ref:hirist.tech)
Posted 2 weeks ago
9.0 - 20.0 years
0 Lacs
Hyderabad, Telangana
On-site
Salesforce is offering immediate opportunities for software developers who are passionate about creating impactful code that benefits users, the company, and the industry. Join a team of talented engineers to develop innovative features that customers will love, while ensuring the stability and scalability of our CRM platform. The software engineer role at Salesforce involves architecture, design, implementation, and testing to deliver high-quality products. You will have the chance to engage in code review, mentor junior engineers, and provide technical guidance to the team, depending on your seniority level. We prioritize writing maintainable code that enhances product stability and efficiency. Our team values individual strengths and encourages personal growth, believing that autonomous teams lead to empowered individuals who drive success for the product, company, and customers.

Responsibilities for Principal, Lead, or Senior Engineers include:
- Developing new components in a rapidly evolving market to enhance scalability and efficiency
- Creating high-quality code for millions of application users
- Making design decisions based on performance and scalability considerations
- Contributing to all phases of the software development life cycle in a Hybrid Engineering model
- Building efficient components in a multi-tenant SaaS cloud environment
- Providing code review, mentorship, and technical guidance to junior team members

Required Skills:
- Proficiency in multiple programming languages and platforms
- 9 to 20 years of software development experience
- Domain knowledge in CCaaS/CPaaS/UCaaS
- Experience with WebRTC, SIP, and telephony-layer protocols
- Strong object-oriented programming and scripting language skills
- Proficiency in SQL and relational/non-relational databases
- Development experience with SaaS applications on public cloud infrastructure
- Knowledge of queues, locks, event-driven architecture, and workload distribution
- Understanding of software development best practices and leadership skills
- Degree or equivalent relevant experience required

Benefits & Perks:
- Comprehensive benefits package including well-being reimbursement, parental leave, adoption assistance, and more
- Access to on-demand training with Trailhead.com
- Opportunities for exposure to executive leadership and coaching
- Participation in volunteer activities and community giving initiatives

For more information on benefits and perks, please visit https://www.salesforcebenefits.com/
Posted 2 weeks ago
3.0 years
0 Lacs
Greater Chennai Area
On-site
Responsibilities
- Participate in requirements definition, analysis, and the design of logical and physical data models for Dimensional Data Model, NoSQL, or Graph Data Model.
- Lead data discovery discussions with Business in JAD sessions and map the business requirements to logical and physical data modeling solutions.
- Conduct data model reviews with project team members.
- Capture technical metadata through data modeling tools.
- Ensure database designs efficiently support BI and end-user requirements.
- Drive continual improvement and enhancement of existing systems.
- Collaborate with ETL/Data Engineering teams to create data process pipelines for data ingestion and transformation.
- Collaborate with Data Architects for data model management, documentation, and version control.
- Maintain expertise and proficiency in the various application areas.
- Maintain current knowledge of industry trends and standards.

Required Skills
- Strong data analysis and data profiling skills.
- Strong conceptual, logical, and physical data modeling for VLDB Data Warehouse and Graph DB.
- Hands-on experience with modeling tools such as ERWIN or another industry-standard tool.
- Fluent in both normalized and dimensional model disciplines and techniques.
- Minimum of 3 years' experience in Oracle Database.
- Hands-on experience with Oracle SQL, PL/SQL, or Cypher.
- Exposure to Databricks Spark, Delta technologies, Informatica ETL, or other industry-leading tools.
- Good knowledge of or experience with AWS Redshift and Graph DB design and management.
- Working knowledge of AWS Cloud technologies, mainly the VPC, EC2, S3, DMS, and Glue services.
- Bachelor's degree in Software Engineering, Computer Science, or Information Systems (or equivalent experience).
- Excellent verbal and written communication skills, including the ability to describe complex technical concepts in relatable terms.
- Ability to manage and prioritize multiple workstreams with confidence in making decisions about prioritization.
- Data-driven mentality; self-motivated, responsible, conscientious, and detail-oriented.
- Effective oral and written communication skills.
- Ability to learn and maintain knowledge of multiple application areas.
- Understanding of industry best practices pertaining to Quality Assurance concepts.

Education and Experience Level
- Bachelor's degree in Computer Science, Engineering, or relevant fields with 3+ years of experience as a Data and Solution Architect supporting Enterprise Data and Integration Applications, or a similar role for large-scale enterprise solutions.
- 3+ years of experience in Big Data infrastructure and tuning experience in a Lakehouse data ecosystem, including Data Lake, Data Warehouses, and Graph DB.
- AWS Solutions Architect Professional-level certification.
- Extensive experience in data analysis on critical enterprise systems like SAP, E1, Mainframe ERP, SFDC, Adobe Platform, and eCommerce systems.

Skill Set Required
GCP, Data Modelling (OLTP, OLAP), indexing, DBSchema, CloudSQL, BigQuery
- Data Modeller: hands-on data modelling for OLTP and OLAP systems.
- In-depth knowledge of conceptual, logical and physical data modelling.
- Strong understanding of indexing, partitioning, and data sharding, with practical experience of having done the same (a DDL sketch follows this listing).
- Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction.
- Should have working experience on at least one data modelling tool, preferably DBSchema.
- People with functional knowledge of the mutual fund industry will be a plus.
- Good understanding of GCP databases like AlloyDB, CloudSQL and BigQuery. (ref:hirist.tech)
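The DDL sketch referenced above: a minimal example of the kind of physical-design decision this role covers, creating a partitioned and clustered BigQuery fact table through the google-cloud-bigquery client. The project, dataset, and column names are hypothetical, not anything specified in the posting.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # assumed project id

ddl = """
CREATE TABLE IF NOT EXISTS sales_mart.fact_orders (
  order_id    STRING NOT NULL,
  customer_id STRING NOT NULL,   -- FK to dim_customer
  order_ts    TIMESTAMP NOT NULL,
  amount      NUMERIC
)
PARTITION BY DATE(order_ts)      -- prunes scans for time-bounded queries
CLUSTER BY customer_id           -- co-locates rows for per-customer lookups
"""

client.query(ddl).result()       # blocks until the DDL job completes
```

Partitioning on the event date plus clustering on the lookup key is the standard trade-off for near-real-time reporting workloads like those described here.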
Posted 2 weeks ago
6.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
About Calfus: Calfus is a Silicon Valley headquartered software engineering and platforms company with a vision deeply rooted in the Olympic motto "Citius, Altius, Fortius Communiter". At Calfus, we aim to inspire our team to rise faster, higher, and stronger while fostering a collaborative environment to build software at speed and scale. Our primary focus is on creating engineered digital solutions that drive positive impact on business outcomes. Upholding principles of #Equity and #Diversity, we strive to create a diverse ecosystem that extends to the broader society. Join us at #Calfus and embark on an extraordinary journey with us!

Position Overview: As a Data Engineer specializing in BI Analytics & DWH, you will be instrumental in crafting and implementing robust business intelligence solutions that empower our organization to make informed, data-driven decisions. Leveraging your expertise in Power BI, Tableau, and ETL processes, you will be responsible for developing scalable architectures and interactive visualizations. This role necessitates a strategic mindset, strong technical acumen, and effective collaboration with stakeholders across all levels.

Key Responsibilities:
- BI Architecture & DWH Solution Design: Develop and design scalable BI Analytical & DWH solutions aligned with business requirements, utilizing tools like Power BI and Tableau.
- Data Integration: Supervise ETL processes through SSIS to ensure efficient data extraction, transformation, and loading into data warehouses.
- Data Modelling: Establish and maintain data models that support analytical reporting and data visualization initiatives.
- Database Management: Employ SQL for crafting intricate queries, stored procedures, and managing data transformations via joins and cursors.
- Visualization Development: Spearhead the design of interactive dashboards and reports in Power BI and Tableau while adhering to best practices in data visualization.
- Collaboration: Engage closely with stakeholders to gather requirements and translate them into technical specifications and architecture designs.
- Performance Optimization: Analyze and optimize BI solutions for enhanced performance, scalability, and reliability.
- Data Governance: Implement data quality and governance best practices to ensure accurate reporting and compliance.
- Team Leadership: Mentor and guide junior BI developers and analysts to cultivate a culture of continuous learning and improvement.
- Azure Databricks: Utilize Azure Databricks for data processing and analytics to seamlessly integrate with existing BI solutions.

Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field.
- 6-12 years of experience in BI architecture and development, with a strong emphasis on Power BI and Tableau.
- Proficiency in ETL processes and tools, particularly SSIS. Strong command of SQL Server, encompassing advanced query writing and database management.
- Proficient in exploratory data analysis using Python (see the sketch after this listing).
- Familiarity with the CRISP-DM model.
- Ability to work with various data models and databases like Snowflake, Postgres, Redshift, and MongoDB.
- Experience with visualization tools such as Power BI, QuickSight, Plotly, and Dash.
- Strong programming foundation in Python for data manipulation, analysis, serialization, database interaction, data pipeline and ETL tools, cloud services, and more.
- Familiarity with Azure SDK is a plus.
- Experience with code quality management, version control, collaboration in data engineering projects, and interaction with REST APIs and web scraping tasks is advantageous.

Calfus Inc. is an Equal Opportunity Employer.
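The sketch referenced above: a minimal pandas pass of the kind the "exploratory data analysis using Python" line describes. The CSV path and the order_date/revenue columns are placeholders, not anything specified in the posting.

```python
import pandas as pd

# Hypothetical extract; parse the date column up front
df = pd.read_csv("sales.csv", parse_dates=["order_date"])

df.info()                                             # dtypes and null counts
print(df.describe(include="all"))                     # summary statistics
print(df.isna().mean().sort_values(ascending=False))  # null ratio per column

# Monthly revenue trend: a typical first cut before dashboarding in Power BI
monthly = df.set_index("order_date").resample("MS")["revenue"].sum()
print(monthly.tail(12))
```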
Posted 2 weeks ago
0.0 - 4.0 years
0 Lacs
Kozhikode, Kerala
On-site
We are seeking a talented and creative 3D Designer to be a part of our team. As a 3D Designer, your primary responsibility will be to produce top-notch 3D models, environments, textures, and visualizations that are in line with the project objectives. Collaboration with diverse teams will be essential to translate ideas into reality while ensuring coherence and ingenuity in all 3D design projects.

The ideal candidate should possess the following skills and qualifications:
- Proficiency in various 3D modeling software such as Blender, 3ds Max, Maya, Cinema 4D, SketchUp.
- Strong knowledge of texturing, lighting, shading, and rendering techniques.
- Experience working with render engines like V-Ray, Arnold, Redshift, or real-time engines such as Unity or Unreal Engine.
- A solid understanding of composition, color theory, and visual storytelling.
- Capable of managing multiple projects simultaneously and meeting deadlines effectively.
- High attention to detail and adept at creative problem-solving.
- A portfolio that demonstrates a diverse range of high-quality 3D design projects.

This position is available as full-time, permanent, and suitable for freshers. Location: On-site
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
The role within Niro Money, Data and Analytics team involves translating data into actionable insights to enhance marketing ROI, business growth, and customer experience for various financial products like personal loans, home loans, credit cards, and insurance. You will be required to have a strong background in data analytics and deliver strategic recommendations to key stakeholders and business heads. You will lead, mentor, and develop a high performing team of data analysts and data scientists focused on building decision science models and segmentation to predict customer behaviors. Additionally, you will develop data-driven strategies to optimize marketing ROI for different financial products such as personal loans and credit cards. Collaborating with the Partnership & Marketing team, you will conduct marketing experiments to improve funnel conversion and measure the effectiveness of marketing campaigns and experiments, recommending necessary changes to enhance customer journey-related product changes. In this role, you will foster a culture of collaboration, innovation, and data-driven decision-making across multiple teams within Niro. Managing multiple analytics projects simultaneously, you will prioritize them based on potential business impacts and ensure timely and accurate project completion. You will also be responsible for project planning and monitoring, promptly addressing challenges to keep projects on track. Additionally, you will partner with Data Engineering, Technology, and Product teams to develop and implement data capabilities for running marketing experiments and delivering actionable insights at scale. The ideal candidate for this role should possess a Master's degree in statistics, mathematics, data science, economics, or BTech in computer science or engineering. You should have at least 5 years of hands-on experience in decision science analytics and building data-driven strategies, preferably in the financial services industry. Additionally, you should have 2+ years of experience in managing and leading a team of data analysts and data scientists, as well as hands-on experience in statistical model development using Python packages like Scikit learn, XGBoost, Stats models, or decision tree tools. Proficiency in SQL and Python is essential, along with a proven track record of decision-making and problem-solving based on analytics. Experience with Snowflake, AWS Athena/S3, Redshift, and BI Tools like AWS Quicksight would be advantageous. You should have a strong analytical mindset, the ability to evaluate complex scenarios, and make data-driven decisions. Being creative and curious with a willingness to learn new tools and techniques is crucial. A data-oriented personality, along with excellent communication and interpersonal skills to collaborate effectively with diverse stakeholders, will be key to success in this role.,
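To ground the modelling stack this posting names (scikit-learn, XGBoost, decision-tree tools), here is a hedged sketch of a response-propensity classifier using scikit-learn's GradientBoostingClassifier. The features and labels are synthetic stand-ins for real customer data; it illustrates the technique, not Niro's actual models.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))   # stand-in for bureau + behavioural features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=5000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# Holdout AUC is a typical gate before a model informs campaign targeting
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"holdout AUC: {auc:.3f}")
```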
Posted 2 weeks ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior Manager - Data Engineering
Career Level - E

Introduction To Role
Join our Commercial IT Data Analytics & AI (DAAI) team as a Data Engineering Lead, where you will play a pivotal role in ensuring the quality and stability of our data platforms built on AWS services, Databricks, and Snaplogic. Based in Chennai GITC, you will drive the quality engineering strategy, lead a team of quality engineers, and contribute to the overall success of our data platform.

Accountabilities
As the Data Engineering Lead for data platforms, you will play a pivotal role in providing leadership and mentorship to your team, driving the adoption of high-quality engineering standards, and fostering effective collaboration across departments. You will lead the design, development, and maintenance of scalable and secure data infrastructure and tools to support the data analytics and data science teams. You will also develop and implement data and data engineering quality assurance strategies and plans tailored to data product build and operations.

Essential Skills/Experience
- Bachelor's degree or equivalent in Computer Engineering, Computer Science, or a related field
- Proven experience in a product quality engineering or similar role, with at least 3 years of experience in managing and leading a team
- Experience of working within a quality and compliance environment and application of policies, procedures, and guidelines
- A broad understanding of cloud architecture (preferably in AWS)
- Strong experience in Databricks, PySpark and the AWS suite of applications (like S3, Redshift, Lambda, Glue, EMR); a quality-gate sketch follows this listing
- Proficiency in programming languages such as Python
- Experienced in Agile development techniques and methodologies
- Solid understanding of data modelling, ETL processes and data warehousing concepts
- Excellent communication and leadership skills, with the ability to collaborate effectively with technical and non-technical stakeholders
- Experience with big data technologies such as Hadoop or Spark
- Certification in AWS or Databricks
- Prior significant experience working in a Pharmaceutical or Healthcare industry IT environment

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca, we are committed to disrupting an industry and changing lives. Our work has a direct impact on patients, transforming our ability to develop life-changing medicines. We empower the business to perform at its peak and lead a new way of working, combining cutting-edge science with leading digital technology platforms and data. We dare to lead, applying our problem-solving mindset to identify and tackle opportunities across the whole enterprise. Our spirit of experimentation is lived every day through our events like hackathons. We enable AstraZeneca to perform at its peak by delivering world-class technology and data solutions. Are you ready to be part of a team that has the backing to innovate, disrupt an industry and change lives? Apply now to join us on this exciting journey!

Date Posted: 21-Jul-2025
Closing Date: 25-Jul-2025

AstraZeneca embraces diversity and equality of opportunity.
We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
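The quality-gate sketch referenced in the listing above: minimal PySpark assertions run against a curated Databricks table before it is published downstream. The table and column names are assumptions for illustration only.

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.table("silver.patient_events")  # assumed Databricks table

total = df.count()
null_ids = df.filter(F.col("patient_id").isNull()).count()
dupes = total - df.dropDuplicates(["event_id"]).count()

# Fail fast so the downstream gold build never sees bad data
assert null_ids == 0, f"{null_ids} rows missing patient_id"
assert dupes == 0, f"{dupes} duplicate event_id rows"
print(f"quality gate passed on {total} rows")
```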
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We’re Hiring: AWS Data Engineers at Coforge Ltd. Immediate Joiners Preferred.
Location: Hyderabad only. Experience Required: 5 to 7 Years. Send your CV to Gaurav.2.Kumar@coforge.com

About the Role: As an AWS Data Engineer, you will be responsible for designing, developing, and optimizing robust, scalable, and secure data pipelines and infrastructure on AWS. You’ll work with diverse datasets and collaborate with cross-functional teams to enable data-driven decision-making across the organization.

Key Responsibilities
- Design and develop end-to-end ETL/ELT pipelines using AWS Glue, Lambda, Step Functions, Kinesis, and Airflow (see the orchestration sketch after this listing)
- Leverage AWS services like S3, Redshift, RDS, DynamoDB, EMR, Athena, and Lake Formation
- Write and optimize Python code for data processing and automation (PySpark experience preferred)
- Design and manage relational and NoSQL databases (MS SQL Server, PostgreSQL, MongoDB)
- Implement data modeling, warehousing, and governance best practices
- Monitor and optimize performance, scalability, and cost of data infrastructure
- Collaborate with data scientists, analysts, and engineering teams
- Maintain clear technical documentation and stay updated with AWS innovations

Required Skills & Qualifications
- Bachelor’s degree in Computer Science or related field
- 6-8 years of hands-on experience in data engineering on AWS
- Strong proficiency in Python and AWS data services
- Expertise in MS SQL Server and other databases
- Experience with orchestration tools like Step Functions and Airflow
- Solid understanding of data lakes, warehousing, and ETL/ELT methodologies
- Excellent communication and problem-solving skills
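The orchestration sketch referenced above: a minimal Airflow 2.x DAG that triggers an AWS Glue job once a day. It assumes the apache-airflow-providers-amazon package is installed and that a Glue job named orders-transform already exists; both names are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_glue = GlueJobOperator(
        task_id="transform_orders",
        job_name="orders-transform",             # assumed pre-created Glue job
        script_args={"--run_date": "{{ ds }}"},  # pass the logical date through
        region_name="ap-south-1",
    )
```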
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
8-10 years of relevant work experience showing growth as a Data Engineer.
- Hands-on programming experience.
- Implementation experience with Kafka, Kinesis, Spark, AWS Glue, and AWS Lake Formation (a streaming sketch follows this listing).
- Experience with performance optimization in batch and real-time processing applications.
- Expertise in data governance and data security implementation.
- Good hands-on design and programming skills, building reusable tools and products.
- Experience developing in AWS or similar cloud platforms. Preferred: ECS, EKS, S3, EMR, DynamoDB, Aurora, Redshift, QuickSight or similar.
- Familiarity with systems with a very high volume of transactions, microservice design, or data processing pipelines (Spark).
- Knowledge of and hands-on experience with serverless technologies such as Lambda, MSK, MWAA, and Kinesis Analytics a plus.
- Expertise in practices like Agile, peer reviews, and continuous integration.
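The streaming sketch referenced above: Spark Structured Streaming reading from Kafka and landing raw events to S3. Broker, topic, and bucket names are placeholders, and the spark-sql-kafka package must be on the cluster.

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
    .option("subscribe", "orders")
    .load()
    .select(F.col("key").cast("string"), F.col("value").cast("string"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://my-lake/bronze/orders/")        # assumed bucket
    .option("checkpointLocation", "s3a://my-lake/_chk/orders/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```

The checkpoint location is what makes restarts exactly-once at the sink, which is the usual first lever for the reliability goals this listing names.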
Posted 2 weeks ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are looking for a skilled and passionate AWS Data Engineer to join our dynamic data engineering team. The ideal candidate will have strong experience in building scalable data pipelines and solutions using AWS, PySpark, Databricks, and Snowflake.

Key Responsibilities:
- Design, develop, and maintain large-scale data pipelines on AWS using PySpark and Databricks.
- Work with Snowflake to perform data warehousing tasks including data loading, transformation, and optimization (see the load sketch after this listing).
- Build efficient and scalable ETL/ELT workflows to support analytics and reporting.
- Implement data quality checks, monitoring, and performance tuning of ETL processes.
- Ensure data governance, security, and compliance in all solutions developed.

Required Skills & Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience as a Data Engineer with strong exposure to AWS cloud services (S3, Lambda, Glue, Redshift, etc.).
- Hands-on experience with PySpark and Databricks for big data processing.
- Proficiency in working with Snowflake.
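The load sketch referenced above: writing a curated PySpark DataFrame into Snowflake via the Spark-Snowflake connector. This assumes the connector is available on the cluster (as it is on Databricks); every connection value below is a placeholder.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("load-snowflake").getOrCreate()

df = spark.read.parquet("s3a://my-lake/silver/orders/")  # assumed curated data

sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",      # use a secret scope in practice, never a literal
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "LOAD_WH",
}

(df.write.format("snowflake")
   .options(**sf_options)
   .option("dbtable", "ORDERS")
   .mode("append")
   .save())
```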
Posted 2 weeks ago
2.0 years
0 Lacs
Mahesana, Gujarat, India
Remote
bEdge Tech Services (www.bedgetechinc.com) is urgently seeking a passionate and experienced Data Engineer to join our dynamic team in Ahmedabad, Gujarat! Are you ready to shape the future of tech talent? We're building a dedicated team to develop training materials, conduct live sessions, and mentor US-based clients and students. This is a unique opportunity to blend your data engineering expertise with your passion for teaching and knowledge sharing. This is a full-time, Work From Office position based in Ahmedabad. No remote or hybrid options are available.

Location: Ahmedabad, Gujarat, India (Work From Office ONLY)
Experience: 2 - 4 years
Salary: ₹35,000 - ₹40,000 per month + Performance Incentives

About the Role: As a key member of our US Client/Student Development team, you'll be instrumental in empowering the next generation of data engineering professionals. Your primary focus will be on:
- Content Creation: Designing and developing comprehensive and engaging training materials, modules, and exercises covering various aspects of data pipeline design, ETL, and data warehousing.
- Live Session Delivery: Conducting interactive live online sessions, workshops, and webinars, demonstrating complex data engineering concepts and practical implementations.
- Mentorship: Providing guidance, support, and constructive feedback to students/clients on their data engineering projects, helping them design robust data solutions and troubleshoot issues.
- Curriculum Development: Collaborating with the team to continuously refine and update data engineering course curricula based on industry trends, new technologies, and student feedback.

Key Responsibilities:
- Develop high-quality training modules on data pipeline design, ETL/ELT processes, data warehousing concepts (dimensional modeling, Kimball/Inmon), and data lake architectures.
- Prepare and deliver engaging live sessions on setting up, managing, and optimizing data infrastructure on cloud platforms (AWS, Azure, GCP).
- Guide and mentor students in building scalable and reliable data ingestion, processing, and storage solutions using various tools and technologies.
- Explain best practices for data quality, data governance, data security, and performance optimization in data engineering.
- Create practical assignments, hands-on labs, and capstone projects that simulate real-world data engineering challenges.
- Stay updated with the latest advancements in big data technologies, cloud data services, and data engineering best practices.

Required Skills & Experience:
- Experience: 2 to 4 years of hands-on industry experience as a Data Engineer or in a similar role focused on data infrastructure.
- Communication: Excellent English communication skills (both written and verbal) are compulsory; the ability to articulate complex technical concepts clearly and concisely to diverse audiences is paramount.
- Passion for Teaching: A strong desire and aptitude for training, mentoring, and guiding aspiring data engineering professionals.
- Analytical Skills: Strong problem-solving abilities, logical thinking, and a structured approach to data infrastructure design.
- Work Ethic: Highly motivated, proactive, and able to work independently as well as collaboratively in a fast-paced environment.
- Location Commitment: Must be willing to work from our Ahmedabad office full-time.

Required Technical Skills:
- Strong programming skills in Python (or Java/Scala) for data processing and scripting.
- Expertise in SQL and experience with relational database systems (e.g., PostgreSQL, MySQL, SQL Server) and/or NoSQL databases (e.g., MongoDB, Cassandra).
- Proven experience with ETL/ELT tools and frameworks (e.g., Apache Airflow, Talend, Fivetran, Data Factory).
- Hands-on experience with at least one major cloud platform (AWS, Azure, or GCP) and its data services (e.g., S3, Redshift, EMR, Glue, Data Lake, Data Factory, BigQuery, Dataproc).
- Familiarity with data warehousing concepts and data modeling techniques (Star Schema, Snowflake Schema); a runnable sketch follows this listing.
- Experience with big data technologies (e.g., Apache Spark, Hadoop) is a significant advantage.
- Understanding of data governance, data security, and data lineage principles.

What We Offer:
- A competitive salary and attractive performance-based incentives.
- The unique opportunity to directly impact the careers of aspiring tech professionals.
- A collaborative, innovative, and supportive work environment.
- Continuous learning and professional growth opportunities in a niche domain.
- Be a part of a rapidly growing team focused on global client engagement.
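The star-schema sketch referenced in the technical-skills list above: one fact table keyed to two dimensions, the shape a trainer in this role would teach. Python's built-in sqlite3 keeps it runnable anywhere; in a real warehouse the same DDL maps onto Redshift or BigQuery types.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date (
    date_key   INTEGER PRIMARY KEY,   -- e.g. 20250101
    full_date  TEXT,
    month      INTEGER,
    year       INTEGER
);
CREATE TABLE dim_product (
    product_key INTEGER PRIMARY KEY,
    name        TEXT,
    category    TEXT
);
CREATE TABLE fact_sales (
    date_key    INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity    INTEGER,
    revenue     REAL
);
""")
print("star schema created")
```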
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
Rajasthan, India
Remote
At GKM IT , we’re passionate about building seamless digital experiences powered by robust and intelligent data systems. We’re on the lookout for a Data Engineer - Senior II to architect and maintain high-performance data platforms that fuel decision-making and innovation. If you enjoy designing scalable pipelines, optimising data systems, and leading with technical excellence, you’ll thrive in our fast-paced, outcome-driven culture. You’ll take ownership of building reliable, secure, and scalable data infrastructure—from streaming pipelines to data lakes. Working closely with engineers, analysts, and business teams, you’ll ensure that data is not just available, but meaningful and impactful across the organization. Requirements 5 to 7 years of experience in data engineering Architect and maintain scalable, secure, and reliable data platforms and pipelines Design and implement data lake/data warehouse solutions such as Redshift, BigQuery, Snowflake, or Delta Lake Build real-time and batch data pipelines using tools like Apache Airflow, Kafka, Spark, and DBT Ensure data governance, lineage, quality, and observability Collaborate with stakeholders to define data strategies, architecture, and KPIs Lead code reviews and enforce best practices Mentor junior and mid-level engineers Optimize query performance, data storage, and infrastructure Integrate CI/CD workflows for data deployment and automated testing Evaluate and implement new tools and technologies as required Demonstrate expert-level proficiency in Python and SQL Possess deep knowledge of distributed systems and data processing frameworks Be proficient in cloud platforms (AWS, GCP, or Azure), containerization, and CI/CD processes Have experience with streaming platforms like Kafka or Kinesis and orchestration tools Be highly skilled with Airflow, DBT, and data warehouse performance tuning Exhibit strong leadership, communication, and mentoring skills Benefits We don’t just hire employees—we invest in people. At GKM IT, we’ve designed a benefits experience that’s thoughtful, supportive, and actually useful. Here’s what you can look forward to: Top-Tier Work Setup You’ll be equipped with a premium MacBook and all the accessories you need. Great tools make great work. Flexible Schedules & Remote Support Life isn’t 9-to-5. Enjoy flexible working hours, emergency work-from-home days, and utility support that makes remote life easier. Quarterly Performance Bonuses We don’t believe in waiting a whole year to celebrate your success. Perform well, and you’ll see it in your pay check—quarterly. Learning is Funded Here Conferences, courses, certifications—if it helps you grow, we’ve got your back. We even offer a dedicated educational allowance. Family-First Culture Your loved ones matter to us too. From birthday and anniversary vouchers (Amazon, BookMyShow) to maternity and paternity leaves—we’re here for life outside work. Celebrations & Gifting, The GKM IT Way Onboarding hampers, festive goodies (Diwali, Holi, New Year), and company anniversary surprises—it’s always celebration season here. Team Bonding Moments We love food, and we love people. Quarterly lunches, dinners, and fun company retreats help us stay connected beyond the screen. Healthcare That Has You Covered Enjoy comprehensive health insurance for you and your family—because peace of mind shouldn’t be optional. Extra Rewards for Extra Effort Weekend work doesn’t go unnoticed, and great referrals don’t go unrewarded. From incentives to bonuses—you’ll feel appreciated.
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Hi, this is Prashant from Triunity Software Inc. Please follow me on LinkedIn: https://www.linkedin.com/in/usaprashantrathore/

Title: Data Analyst
Mode of Office: Hybrid (3 days a week), possibly 5 days working. Preference for local candidates.
Time Zone: IST (with an overlap of 2-3 PST hours at times for client calls)

JD:
Responsibilities:
• Perform advanced analytics like cohort analysis, scenario analysis, time series analysis, and predictive analysis, creating powerful visualizations to communicate the results (see the sketch after this listing).
• Articulate assumptions, analyses and interpretations of data in a variety of modes.
• Design data models that define how the tables, columns, and data elements from different sources are connected and stored, based on our reporting and analytics requirements.
• Work closely with BI engineers to develop efficient, highly performant, scalable reporting and analytics solutions.
• Query data from warehouses like Snowflake using SQL.
• Validate and QA data to ensure consistent data accuracy and quality.
• Troubleshoot data issues and conduct root cause analysis when reporting data is in question.

Requirements (hands-on SQL & Tableau):
• 5+ years relevant experience in Data Analytics, BI Analytics, or BI Engineering, preferably with a globally recognized organization.
• Expert-level skills writing complex SQL queries to create views in warehouses like Snowflake, Redshift, SQL Server, Oracle, BigQuery.
• Advanced skills in designing and creating data models and dashboards in BI tools like Tableau, Domo, Looker, etc.
• Intermediate-level skills in analytical tools like Excel, Google Sheets, or Power BI (complex formulas, lookups, pivots, etc.)
• Bachelor's/advanced degree in Data Analytics, Data Science, Information Systems, Computer Science, Applied Math, Statistics, or a similar field of study.
• Willingness to work with internal team members and stakeholders in other time zones.

The People Analytics team is seeking a Data Analyst - an experienced analytics professional who is passionate about unleashing the power of data to help us inform decision making, achieve our strategic objectives, and hire and retain world-class talent. As an integral part of the team, the Data Analyst will leverage their analytical skills and business acumen to turn data into knowledge and drive business success. The ideal candidate will play a key role in designing and developing advanced analytics and reporting solutions based on business requirements, and in delivering data-driven insights and recommendations that provide answers to critical HR-related questions. They must be able to validate data and ensure data quality, design data models, perform sophisticated analyses, and create visualizations using BI tools like Tableau. In particular, they should possess strong SQL skills and be experienced in working with BI engineers to troubleshoot and systematically eliminate data issues.
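The sketch referenced above: a cohort-analysis query of the kind the JD calls out, run against Snowflake from Python. It assumes a hypothetical events table with user_id and event_ts columns; the query groups users by first-seen month and counts how many stay active in each later month.

```python
import snowflake.connector  # pip install snowflake-connector-python

COHORT_SQL = """
WITH first_seen AS (
    SELECT user_id, DATE_TRUNC('month', MIN(event_ts)) AS cohort_month
    FROM events
    GROUP BY user_id
)
SELECT f.cohort_month,
       DATEDIFF('month', f.cohort_month,
                DATE_TRUNC('month', e.event_ts)) AS month_n,
       COUNT(DISTINCT e.user_id) AS active_users
FROM events e
JOIN first_seen f USING (user_id)
GROUP BY 1, 2
ORDER BY 1, 2
"""

conn = snowflake.connector.connect(
    account="myorg-myaccount", user="analyst", password="***",  # placeholders
    warehouse="ANALYTICS_WH", database="PROD", schema="PUBLIC",
)
for cohort, month_n, users in conn.cursor().execute(COHORT_SQL).fetchall()[:5]:
    print(cohort, month_n, users)
```

The resulting cohort/month grid is exactly what a Tableau retention heatmap would be built on.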
Posted 2 weeks ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Flutter Entertainment Flutter Entertainment is a global leader in sports betting, gaming, and entertainment, with annual revenues of $11.7 Bn and a customer base of over 12 million players (in 2023) driven by a portfolio of iconic brands, including Paddy Power, Betfair, FanDuel, PokerStars, Junglee Games and Sportsbet. Listed on both the New York Stock Exchange (NYSE) and the London Stock Exchange (LSE), Flutter was recently included in TIME's 100 Most Influential Companies of 2024 in the 'Pioneers' section. Our ambition is to transform global gaming and betting to deliver long-term growth and a positive, sustainable future for our sector. Working at Flutter is a chance to work with a growing portfolio of brands across a range of opportunities. We will support you every step of the way to help you grow. Just like our brands, we ensure our people have everything they need to succeed. FLUTTER ENTERTAINMENT INDIA Our Hyderabad office, located in one of India’s premier technology parks is the Global Capability Center for Flutter Entertainment. A center of expertise and innovation, this hub is now home to over 1000+ employees working across Customer Service Operations, Data and Technology, Finance Operations, HR Operations, Procurement Operations, and other key enabling functions. We are committed to crafting impactful solutions for all our brands and divisions to power Flutter's incredible growth and global impact. With the scale of a leader and the mindset of a challenger, we’re dedicated to creating a brighter future for our customers, colleagues, and communities. Overview Of The Role We are looking for a Data Analyst to join our Data & Analytics (ODA) department in Hyderabad, India . Delivering deep insights and analytics within a fast-paced culture in the world’s largest online gaming company, you will join a team of exceptional data analysts who shape the future of online gaming through detailed analysis for safer gambling, fraud, customer experience, regulatory, overall operations and other departments. Your work will have a tangible impact on our players’ experience and our business’ direction. You shall dive into databases, querying large volumes of behavioural data to create actionable insights and deliver recommendations to department heads and directors. As well as leading in-depth analysis of customer behaviour, you will develop dashboards and executive summaries for various audiences, establish and track key performance indicators and provide ad-hoc analytical support. Your work will bring our extensive data to life, adding insight to key decision-making processes and optimising our systems to keep our site safe, sustainable and where the best play. 
KEY RESPONSIBILITIES
- Extract data from our databases in various environments (DB2, MS SQL Server and Azure), then process and interpret it using statistical techniques in Python and Excel
- Identify patterns and emerging trends with detailed analysis to offer constructive suggestions, and estimate their potential impact on customers and the business
- Create presentations that synthesise findings from multiple analyses to inform strategic decision-making of senior leadership
- Develop interactive dashboards that highlight key metrics and trends in customer behaviour and payment fraud
- Design infographics that visually communicate complex data and analysis to a non-technical audience, such as regulators or customer support teams
- Engage with global business stakeholders on key projects, understanding how each area of the business works to provide data-driven insights and guidance
- Help define product roadmaps by identifying opportunities for improvement based on data and analysis

TO EXCEL IN THIS ROLE, YOU WILL NEED TO HAVE
- 2 to 4 years of relevant work experience as a Data Analyst or Data Scientist
- Bachelor's degree in a quantitative field such as Science, Mathematics, Economics, Engineering
- Proficiency in SQL with the ability to create complex queries from scratch
- Advanced expertise in Microsoft Excel, PowerPoint and Word
- Experience presenting and reporting analyses to stakeholders
- Ability to create high-quality data visualizations, local & server-based automation solutions and presentations using PowerPoint and tools such as MicroStrategy, Tableau, or PowerBI
- Experience with programming (e.g., Python, R etc.)
- Applied experience with statistical techniques such as hypothesis testing, causal impact analysis, regression analysis, or time series analysis (a small example follows this listing)
- Excellent organisational and communication skills with the ability to manage day-to-day work independently and consistently deliver quality work within deadlines

Desired Qualifications
- Experience with data warehouse technologies (e.g., MS SQL Server Management Studio, Amazon Redshift) is a plus
- Certifications (MOOCs) on Data Analysis, Python, SQL, ETL, DSA, ML/DL, Data Science, etc. are desirable

Benefits We Offer
- Access to Learnerbly, Udemy, and a Self-Development Fund for upskilling
- Career growth through Internal Mobility Programs
- Comprehensive Health Insurance for you and dependents
- Well-Being Fund and 24/7 Assistance Program for holistic wellness
- Hybrid Model: 2 office days/week with flexible leave policies, including maternity, paternity, and sabbaticals
- Free Meals, Cab Allowance, and a Home Office Setup Allowance
- Employer PF Contribution, gratuity, Personal Accident & Life Insurance
- Sharesave Plan to purchase discounted company shares
- Volunteering Leave and Team Events to build connections
- Recognition through the Kudos Platform and Referral Rewards

WHY CHOOSE US
Flutter is an equal-opportunity employer and values the unique perspectives and experiences that everyone brings. Our message to colleagues and stakeholders is clear: everyone is welcome, and every voice matters. We have ambitious growth plans and goals for the future. Here's an opportunity for you to play a pivotal role in shaping the future of Flutter Entertainment India
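The small example referenced above, for one of the statistical techniques listed (hypothesis testing): a Welch two-sample t-test comparing simulated deposit amounts between a control and a variant group. The data is synthetic; only the technique is illustrated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=100.0, scale=30.0, size=2000)  # stand-in behaviour
variant = rng.normal(loc=103.0, scale=30.0, size=2000)

t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)  # Welch
print(f"t={t_stat:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("difference unlikely under the null; investigate the uplift")
```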
Posted 2 weeks ago
6.0 - 10.0 years
8 - 12 Lacs
Hyderabad
Work from Office
Responsibilities:
- Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform
- Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing
- Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing
- Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions
- Design and implement data warehouse solutions that support analytical needs and machine learning applications
- Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features
- Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability (see the sketch after this listing)
- Optimize query performance across various database systems through indexing, partitioning, and query refactoring
- Develop and maintain documentation for data models, pipelines, and processes
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs
- Stay current with emerging technologies and best practices in data engineering

Requirements:
- 5+ years of experience in data engineering or related roles with a proven track record of building data pipelines and infrastructure
- Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL
- Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB
- Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies
- Experience with data warehousing concepts and technologies
- Solid understanding of data modeling principles and best practices for both operational and analytical systems
- Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning
- Experience with AWS data services such as RDS, Redshift, S3, Glue, Kinesis, and the ELK stack
- Proficiency in at least one programming language (Python, Node.js, Java)
- Experience with version control systems (Git) and CI/CD pipelines

Preferred Qualifications:
- Experience with graph databases (Neo4j, Amazon Neptune)
- Knowledge of big data technologies such as Hadoop, Spark, Hive, and data lake architectures
- Experience working with streaming data technologies and real-time data processing
- Familiarity with data governance and data security best practices
- Experience with containerization technologies (Docker, Kubernetes)
- Understanding of financial back-office operations and the FinTech domain
- Experience working in a high-growth startup environment
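The sketch referenced above, for the data-quality-and-alerting item: a row-count reconciliation between a source MySQL table and its warehouse copy using SQLAlchemy. Connection strings and table names are placeholders, not anything from the posting.

```python
from sqlalchemy import create_engine, text

source = create_engine("mysql+pymysql://user:***@mysql-host/orders_db")
target = create_engine("postgresql+psycopg2://user:***@redshift-host/analytics")

def row_count(engine, table: str) -> int:
    with engine.connect() as conn:
        return conn.execute(text(f"SELECT COUNT(*) FROM {table}")).scalar()

src_n = row_count(source, "orders")
tgt_n = row_count(target, "staging.orders")

drift = abs(src_n - tgt_n)
if drift > 0:
    # In production this would page via CloudWatch/Slack rather than print
    print(f"ALERT: orders count drift of {drift} rows (src={src_n}, tgt={tgt_n})")
```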
Posted 2 weeks ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Why We Work at Dun & Bradstreet
Dun & Bradstreet unlocks the power of data through analytics, creating a better tomorrow. Each day, we are finding new ways to strengthen our award-winning culture and accelerate creativity, innovation and growth. Our 6,000+ global team members are passionate about what we do. We are dedicated to helping clients turn uncertainty into confidence, risk into opportunity and potential into prosperity. Bold and diverse thinkers are always welcome. Come join us! Learn more at dnb.com/careers.

This role is responsible for maintaining and enhancing our existing Power BI dashboards, ensuring data accuracy, reliability, and consistency across reporting outputs. The analyst will lead or contribute to process improvement and optimization initiatives, such as the upcoming Master Org project, which aims to integrate regional CRMs into a unified Salesforce (SFDC) master tenant. Additionally, the role supports various ad hoc projects and operates across a global footprint, driving data-driven decision-making and operational excellence.

Key Responsibilities
- Maintains the existing Customer Service Power BI Reporting Suite.
- Handles all aspects of Service Operations projects related to reporting and data collection.
- Manages the design, creation, and implementation of new Power BI metrics and reports, working closely with business subject matter experts to understand requirements and translate them into a reporting solution.
- Works with the Customer Service Team to look for opportunities to automate and/or replace existing reporting methods.
- Performs ad hoc analysis of our data, looking for opportunities for performance improvement within the Customer Service space.
- Periodically conducts training with end-users, to ensure they optimize their use of the tool.
- Be a key member of the Master Org project, focusing on our regional reporting needs and supporting the Service Operations Lead.
- Establish a long-term reporting strategy using both SQL > PBI and SFDC Master Org > Aligned Dashboard reporting.
- Participate in global cross-functional teams addressing process improvement opportunities within the Global Customer Service organization (GCS).
- Periodically participate in deep dives into issues highlighted by GCS Leadership.

Key Skills
- Bachelor's degree with a minimum of 3 years of relevant professional experience.
- Familiarity with the structure and operations of Global Customer Services, or experience in a comparable environment, is preferred.
- Proven expertise in Microsoft Power BI and SQL, with at least 3 years of hands-on experience developing and maintaining Power BI reports in a production setting.
- Proficient in Python and working with relational databases. Experience with AWS services such as Redshift, EC2, and RDS is a strong plus.
- Experience with SFDC, Five9 and Qualtrics is a plus.
- Ability to work independently and with diverse teams, in a dynamic environment, as part of a global team while managing multiple priorities.
- Strong analytical, conceptual, and problem-solving skills.
- Excellent communication skills (oral and written).
- Proficiency in the Microsoft Office Suite.
- Show an ownership mindset in everything you do. Be a problem solver, be curious and be inspired to take action. Be proactive, seek ways to collaborate and connect with people and teams in support of driving success.
Continuous growth mindset, keep learning through social experiences and relationships with stakeholders, experts, colleagues and mentors as well as widen and broaden your competencies through structural courses and programs. Where applicable, fluency in English and languages relevant to the working market. All Dun & Bradstreet job postings can be found at https://www.dnb.com/about-us/careers-and-people/joblistings.html and https://jobs.lever.co/dnb . Official communication from Dun & Bradstreet will come from an email address ending in @dnb.com. Notice to Applicants: Please be advised that this job posting page is hosted and powered by Lever. Your use of this page is subject to Lever's Privacy Notice and Cookie Policy , which governs the processing of visitor data on this platform.
Posted 2 weeks ago
5.0 years
6 - 10 Lacs
Hyderābād
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world. Job Title: AWS Senior Data Engineer Experience Required: Minimum 5+ years Job Summary: We are seeking a skilled Data Engineer with a strong background in data ingestion, processing, and storage. The ideal candidate will have experience working with various data sources and technologies, particularly in a cloud environment. You will be responsible for designing and implementing data pipelines, ensuring data quality, and optimizing data storage solutions. Key Responsibilities: Design, develop, and maintain scalable data pipelines for data ingestion and processing using Python, Spark, and AWS services. Work with on-prem Oracle databases, batch files, and Confluent Kafka for data sourcing. Implement and manage ETL processes using AWS Glue and EMR for batch and streaming data. Develop and maintain data storage solutions using Medallion Architecture in S3, Redshift, and Oracle. Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs. Monitor and optimize data workflows using Airflow and other orchestration tools. Ensure data quality and integrity throughout the data lifecycle. Implement CI/CD practices for data pipeline deployment using Terraform and other tools. Utilize monitoring and logging tools such as CloudWatch, Datadog, and Splunk to ensure system reliability and performance. Communicate effectively with stakeholders to gather requirements and provide updates on project status. Technical Skills Required: Proficient in Python for data processing and automation. Strong experience with Apache Spark for large-scale data processing. Familiarity with AWS S3 for data storage and management. Experience with Kafka for real-time data streaming. Knowledge of Redshift for data warehousing solutions. Proficient in Oracle databases for data management. Experience with AWS Glue for ETL processes. Familiarity with Apache Airflow for workflow orchestration. Experience with EMR for big data processing. Mandatory: Strong AWS data engineering skills. Good Additional Skills: Familiarity with Terraform for infrastructure as code. Experience with messaging services such as SNS and SQS. Knowledge of monitoring and logging tools like CloudWatch, Datadog, and Splunk. Experience with AWS DataSync, DMS, Athena, and Lake Formation. Communication Skills: Excellent verbal and written communication skills are mandatory for effective collaboration with team members and stakeholders. EY | Building a better working world EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
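To make the Medallion Architecture line in the listing above concrete, here is a hedged PySpark sketch of one hop in that layout: raw (bronze) order events on S3 deduplicated and typed into a silver table. The paths and schema are assumptions for illustration, not EY specifics.

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.json("s3a://lake/bronze/orders/")   # as landed from Kafka

silver = (
    bronze.dropDuplicates(["order_id"])                 # idempotent re-runs
          .filter(F.col("order_id").isNotNull())
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .withColumn("ingest_date", F.current_date())  # partition column
)

(silver.write.mode("overwrite")
       .partitionBy("ingest_date")
       .parquet("s3a://lake/silver/orders/"))
```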
Posted 2 weeks ago
0 years
6 - 8 Lacs
Hyderābād
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change—we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Consultant – Data Engineer – Databricks!

In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements. You would be part of the data integrity/analysis team in the banking and financial services domain, responsible for independently building data analyses around complex business problems from data available in client-owned or accessible systems. To do this, you would be expected to understand the business ask/problem, assess the scope, quantity and quality of the available data, and prepare and build code using PySpark/Databricks and Python, loading data into the DWH and Data Mart for downstream consumption teams.

Responsibilities
Extensive hands-on experience in Python (PySpark) and PySpark with SQL.
Hands-on experience with RDDs, StructTypes and related PySpark constructs.
Exposure to Databricks notebooks for PySpark and PySpark-with-SQL coding.
Good hands-on experience integrating AWS services using Python.
Experience with cloud technologies like AWS (S3, Redshift, SNS).
Expertise in developing ETL and batch processes to support data movement.
Good communication skills and a self-driven attitude; able to work independently on own deliverables and discussion points with the onshore customer.

Qualifications we seek in you!
Minimum Qualifications / Skills: Degree (BE, B.Sc.)
Preferred Qualifications: Good communication and client-handling skills.

Why join Genpact?
Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation
Make an impact – Drive change for global enterprises and solve business challenges that matter
Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities
Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.
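To make the RDD/StructType work this posting mentions concrete, below is a hedged sketch that declares an explicit schema, builds a DataFrame from an RDD, and mixes in Spark SQL, in the style of a Databricks notebook cell. All names and values are invented for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("structtype-demo").getOrCreate()

# Explicit schema, declared instead of relying on inference.
schema = StructType([
    StructField("account_id", StringType(), nullable=False),
    StructField("segment", StringType(), nullable=True),
    StructField("balance", DoubleType(), nullable=True),
])

# A small RDD of tuples standing in for rows sourced from an upstream system.
rdd = spark.sparkContext.parallelize([
    ("A-001", "retail", 1200.50),
    ("A-002", "corporate", 98000.00),
])

# Apply the schema, then layer Spark SQL on top of the DataFrame.
df = spark.createDataFrame(rdd, schema)
df.createOrReplaceTempView("accounts")
spark.sql(
    "SELECT segment, SUM(balance) AS total FROM accounts GROUP BY segment"
).show()
```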
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Consultant
Primary Location: India-Hyderabad
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jul 21, 2025, 1:56:27 AM
Unposting Date: Ongoing
Master Skills List: Digital
Job Category: Full Time
Posted 2 weeks ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
It's fun to work in a company where people truly BELIEVE in what they're doing! We're committed to bringing passion and customer focus to the business.

Job Description
This role requires working from our local Hyderabad office 2-3x a week.

INTRODUCTION:
We are seeking a Data Engineer to join our Data team to deliver, maintain, and evolve our data platform, fulfilling our mission to empower our customers by giving them access to their data through reports powered by ABC Fitness. In this role, you'll develop and maintain our data infrastructure and work closely with cross-functional teams to translate business requirements into technical solutions, ensuring data integrity, scalability, and efficiency. We are known for being an energized, fun, friendly and customer-focused cross-functional team.

WHAT YOU'LL DO:
Design, develop, and maintain efficient data pipelines that serve data to reports, using various cloud ETL tools.
Design, implement and manage data workflows, ensuring seamless data orchestration and integration.
Create and optimize SQL objects, including stored procedures, tables and views, for great performance.
Implement data quality checks and validation processes to ensure accuracy, completeness, and consistency of data across different stages of the pipeline (see the sketch after this list).
Monitor data pipelines and processes, troubleshoot issues, and implement solutions to prevent recurrence.
Work on your own initiative and take responsibility for delivering high-quality solutions.
Collaborate with stakeholders to understand reporting requirements and provide support in developing interactive dashboards using Power BI for data visualization.
Maintain comprehensive documentation of data pipelines, workflows, and data models.
Adhere to best practices in data engineering and ensure compliance with organizational standards.
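As a hedged illustration of the data quality checks called out above, this small Python sketch validates a staged batch for completeness, uniqueness, and consistency before publication. The column names and rules are assumptions for illustration, not ABC's actual checks.

```python
import pandas as pd

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality failures for a staged batch."""
    failures = []
    # Completeness: key columns must not contain nulls (columns are assumed).
    for col in ("member_id", "club_id", "event_ts"):
        if df[col].isna().any():
            failures.append(f"nulls found in required column: {col}")
    # Uniqueness: one row per member per event timestamp.
    if df.duplicated(subset=["member_id", "event_ts"]).any():
        failures.append("duplicate member/event rows detected")
    # Consistency: timestamps must not be in the future.
    if (pd.to_datetime(df["event_ts"]) > pd.Timestamp.now(tz="UTC")).any():
        failures.append("event_ts values in the future")
    return failures

batch = pd.DataFrame({
    "member_id": ["m1", "m2"],
    "club_id": ["c1", "c1"],
    "event_ts": ["2025-01-01T10:00:00Z", "2025-01-01T11:00:00Z"],
})
print(validate_batch(batch) or "batch passed all checks")
```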
WHAT YOU'LL NEED:
Minimum 2-5 years of experience in a data engineering role
Bachelor's degree in computer science, Information Technology, or a related field
Experience in data management best practices, including demonstrated experience with data profiling, sourcing, and cleansing routines utilizing typical data quality functions involving standardization, transformation, rationalization, linking and matching
Proficient in SQL and Python, with the ability to translate complexity into efficient code
Experience working on different types of databases: Azure SQL DB, Azure Synapse SQL Pool, AWS Redshift, MySQL, etc.
Experience with Azure DevOps and/or GitHub
Experience with Azure Data Factory and/or Apache Airflow
Effective communication skills (verbal and written) in English
Genuine passion about technology and solving data problems
Structured thinking with the ability to break down ambiguous problems and propose impactful data modeling designs
Ability to use data to inform decision making and drive outcomes
Ability to understand, document and convert business requirements into data models
Ability to work effectively with a remote team across multiple time zones
Driven and self-motivated with excellent organizational skills
Comfortable learning innovative technologies and systems
All applicants must be able to work from our Hyderabad office 2-3x a week

AND IT'S GREAT TO HAVE:
Experience building data models for Power BI
Working knowledge of Gen 2 Azure Data Lake, Storage Account, Blobs, Azure Function, Logic App
Working knowledge of AWS S3, EMR, EKR

WHAT'S IN IT FOR YOU:
Purpose led company with a Values focused culture – Best Life, One Team, Growth Mindset
Time Off – competitive PTO plans with 15 Earned accrued leave, 12 days Sick leave, and 12 days Casual leave per year
11 Holidays plus 4 Days of Disconnect – once a quarter, we take a collective breather and enjoy a day off together around the globe. #oneteam
Group Mediclaim insurance coverage of INR 500,000 for employee + spouse, 2 kids, and parents or parents-in-law, including EAP counseling
Life Insurance and Personal Accident Insurance
Best Life Perk – we are committed to meeting you wherever you are in your fitness journey with a quarterly reimbursement
Premium Calm App – enjoy tranquility with a Calm App subscription for you and up to 4 dependents over the age of 16
Support for working women with financial aid towards crèche facility, ensuring a safe and nurturing environment for their little ones while they focus on their careers.

We're committed to diversity and passion, and encourage you to apply, even if you don't demonstrate all the listed skillsets!

ABC'S COMMITMENT TO DIVERSITY, EQUALITY, BELONGING AND INCLUSION:
ABC is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We are intentional about creating an environment where employees, our clients and other stakeholders feel valued and inspired to reach their full potential and make authentic connections. We foster a workplace culture that embraces each person's diversity, including the extent to which they are similar or different. ABC leaders believe that an equitable and inclusive culture is not only the right thing to do, it is a business imperative.
Read more about our commitment to diversity, equality, belonging and inclusion at abcfitness.com ABOUT ABC: ABC Fitness (abcfitness.com) is the premier provider of software and related services for the fitness industry and has built a reputation for excellence in support for clubs and their members. ABC is the trusted provider to boost performance and create a total fitness experience for over 41 million members of clubs of all sizes whether a multi-location chain, franchise or an independent gym. Founded in 1981, ABC helps over 31,000 gyms and health clubs globally perform better and more profitably offering a comprehensive SaaS club management solution that enables club operators to achieve optimal performance. ABC Fitness is a Thoma Bravo portfolio company, a private equity firm focused on investing in software and technology companies (thomabravo.com). If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!
Posted 2 weeks ago
7.0 years
1 - 9 Lacs
Bengaluru
On-site
Organization: At CommBank, we never lose sight of the role we play in other people's financial wellbeing. Our focus is to help people and businesses move forward to progress. To make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Senior Software Engineer – Data Modernization (GenAI)
Location: Manyata Tech Park, Bangalore (Hybrid)

Business & Team: CommSec is Australia's largest online retail stockbroker. It is one of the most highly visible and visited online assets in Australian financial services. CommSec's systems utilise a variety of technologies and support a broad range of investors. Engineers within CommSec are offered regular opportunities to work on some of the finest IT systems in Australia, as well as the opportunity to develop careers across different functions and teams within the wider Bank.

Impact & Contribution: Apply core concepts, technology and domain expertise to effectively develop software solutions to meet business needs. You will contribute to building a brighter future for all by ensuring that our team builds the best solutions possible using modern development practices that ensure both functional and non-functional needs are met. If you have a history of building a culture of empowerment and know what it takes to be a force multiplier within a large organization, then you're the kind of person we are looking for. You will report to the Lead Engineer within Business Banking Technology.

Roles & Responsibilities:
Build scalable agentic AI solutions that integrate with existing systems and support business objectives.
Implement MLOps pipelines.
Design and conduct experiments to evaluate model performance and iteratively refine models based on findings.
Automate LLM output validation and the metrication of AI outputs.
Apply ethical AI practices and the tools that implement them.
Use AWS cloud services such as SNS, SQS, and Lambda.
Work with big data platform technologies such as the Spark framework and vector databases.
Collaborate with software engineers to deploy AI models in production environments, ensuring robustness and scalability.
Participate in research initiatives to explore new AI models and methodologies that can be applied to current and future products.
Develop and implement monitoring systems to track the performance of AI models in production.
Apply hands-on DevSecOps practices, including continuous integration/continuous deployment and security practices.

Essential Skills: The AI Engineer will be involved in the development and deployment of advanced AI and machine learning models. The ideal candidate is highly skilled in MLOps and software engineering, with a strong track record of developing AI models and deploying them in production environments.
7+ years' experience
RAG, Prompt Engineering
Vector DB, DynamoDB, Redshift
Spark framework, Parquet, Iceberg
Python
MLOps
Langfuse, LlamaIndex, MLflow, GLEU, BLEU
AWS cloud services such as SNS, SQS, Lambda
Traditional Machine Learning

Education Qualifications: Bachelor's or Master's degree in engineering in Information Technology.

If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application.
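For a flavour of the MLOps tracking this role lists (MLflow alongside BLEU/GLEU-style metrics), here is a minimal sketch that logs parameters and evaluation scores for a run. The experiment name, parameters, and scores are invented; a real evaluation harness would compute them.

```python
import mlflow

# Minimal sketch: record an LLM evaluation run with MLflow.
# The experiment name, params, and metric values are illustrative assumptions.
mlflow.set_experiment("llm-answer-quality")

with mlflow.start_run(run_name="rag-baseline"):
    mlflow.log_param("retriever_top_k", 5)
    mlflow.log_param("prompt_version", "v3")
    # Scores such as BLEU/GLEU would come from an automated evaluation harness.
    mlflow.log_metric("bleu", 0.41)
    mlflow.log_metric("gleu", 0.44)
    mlflow.log_metric("faithfulness", 0.87)
```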
We’re keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696. Advertising End Date: 06/08/2025
Posted 2 weeks ago
0.0 years
0 Lacs
Bengaluru
On-site
Data Engineer - 1 (Experience: 0-2 years)

What we offer
Our mission is simple – Building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team
DEX is the central data org for Kotak Bank, managing the entire data experience of the bank. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering and Data Governance charters and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which gives technologists a great opportunity to build things from scratch and deliver a best-in-class data lakehouse solution. The primary skills this team encompasses are software development (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org is expected to be a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters.

As a member of this team, you get the opportunity to learn the fintech space, one of the most sought-after domains today; be an early member of Kotak's digital transformation journey; learn and leverage technology to build complex data platform solutions (real-time, micro-batch, batch and analytics) in a programmatic way; and look ahead to build systems that can be operated by machines using AI technologies.

The data platform org is divided into 3 key verticals:

Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and a centralized data lake; managed compute and orchestration frameworks, including serverless data solutions; a central data warehouse for extremely high-concurrency use cases; connectors for different sources; a customer feature repository; and cost-optimization solutions like EMR optimizers. The team also performs automations, builds observability capabilities for Kotak's data platform, and serves as the center for Data Engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.

Data Engineering
This team will own data pipelines for thousands of datasets, source data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will build data models in a config-based, programmatic way and think big to deliver one of the most leveraged data models among financial orgs. This team will also enable centralized reporting for Kotak Bank, cutting across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, Branch Managers and all analytics use cases.
Data Governance
This team will be the central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship and the data quality platform.

If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency, multi-system environments, then this is the team for you.

Your day-to-day role will include:
Drive business decisions with technical input and lead the team.
Design, implement, and support a data infrastructure from scratch.
Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
Extract, transform, and load data from various sources using SQL and AWS big data technologies.
Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Build data platforms, data pipelines, or data management and governance tools.

BASIC QUALIFICATIONS for Data Engineer / SDE in Data
Bachelor's degree in Computer Science, Engineering, or a related field
Experience in data engineering
Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
Experience with data pipeline tools such as Airflow and Spark
Experience with data modeling and data quality best practices
Excellent problem-solving and analytical skills
Strong communication and teamwork skills
Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
Strong advanced SQL skills

PREFERRED QUALIFICATIONS
AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
Prior experience in the Indian banking segment and/or fintech is desired.
Experience with non-relational databases and data stores
Building and operating highly available, distributed data processing systems for large datasets
Professional software engineering and best practices for the full software development life cycle
Designing, developing, and implementing different types of data warehousing layers
Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
Building scalable data infrastructure and understanding distributed systems concepts
SQL, ETL, and data modelling
Ensuring the accuracy and availability of data to customers
Proficient in at least one scripting or programming language for handling large-volume data processing
Strong presentation and communication skills.
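To ground the Airflow/MWAA orchestration mentioned in the responsibilities above, below is a minimal DAG sketch with an extract-then-load dependency. The DAG id, schedule, and task bodies are illustrative assumptions, not Kotak's pipelines.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull a batch from a source system to S3 (path is assumed).
    print("extracting source batch to s3://example-bucket/raw/")

def load():
    # Placeholder: load the curated batch into Redshift.
    print("loading curated batch into Redshift")

with DAG(
    dag_id="example_daily_batch",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    # Load only runs after a successful extract.
    extract_task >> load_task
```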
Posted 2 weeks ago
8.0 years
4 - 6 Lacs
Bengaluru
On-site
Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it's consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description
About the Role
Nielsen is seeking an organized, detail-oriented team player to join the ITAM Back Office Engineering team in the role of Software Engineer. Nielsen's Audience Measurement Engineering platforms support the measurement of television viewing in more than 30 countries around the world. Ideal candidates will have exceptional skills in programming, testing, debugging and problem solving, as well as effective communication and writing skills.

Responsibilities
System Deployment: Conceive, design and build new features in the existing backend processing pipelines.
CI/CD Implementation: Design and implement CI/CD pipelines for automated build, test, and deployment processes. Ensure continuous integration and delivery of features, improvements, and bug fixes.
Code Quality and Best Practices: Enforce coding standards, best practices, and design principles. Conduct code reviews and provide constructive feedback to maintain high code quality.
Performance Optimization: Identify and address performance bottlenecks in reading, processing and writing data to the backend data stores.
Mentorship and Collaboration: Mentor junior engineers, providing guidance on technical aspects and best practices. Collaborate with cross-functional teams to ensure a cohesive and unified approach to software development.
Security and Compliance: Implement security best practices for all tiers of the system. Ensure compliance with industry standards and regulations related to AWS platform security.

Key Skills
Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
Proven experience (minimum 8 years) in high-volume data processing development using ETL tools such as AWS Glue or PySpark, plus Python, SQL and databases such as Postgres.
Experience developing on the AWS platform.
Strong understanding of CI/CD principles and tools; GitLab a plus.
Excellent problem-solving and debugging skills.
Strong communication and collaboration skills, with the ability to communicate complex technical concepts and align the organization on decisions.
Sound problem-solving skills with the ability to quickly process complex information and present it clearly and simply.
Utilizes team collaboration to create innovative solutions efficiently.

Other desirable skills
Knowledge of networking principles and security best practices.
AWS certifications.
Experience with Data Warehouses, ETL, and/or Data Lakes very desirable.
Experience with Redshift, Airflow, Python, Lambda, Prometheus, Grafana, and OpsGenie a bonus.
Exposure to the Google Cloud Platform (GCP).

Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain.
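As a hedged sketch of the AWS Glue ETL work this posting references, here is a minimal Glue job skeleton that reads a catalogued table, filters out rows missing a key, and writes Parquet to S3. The database, table, and path names are hypothetical assumptions, not Nielsen's systems.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued source table (database/table names are assumptions).
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="viewing_events"
)

# Drop rows missing a primary key, then persist as Parquet on S3.
cleaned = source.filter(lambda row: row["event_id"] is not None)
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/viewing_events/"},
    format="parquet",
)
job.commit()
```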
Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
Posted 2 weeks ago
2.0 years
2 - 9 Lacs
Bengaluru
On-site
Updraft. Helping you make changes that pay off.

Updraft is an award-winning, FCA-authorised, high-growth fintech based in London. Our vision is to revolutionise the way people spend and think about money, by automating the day-to-day decisions involved in managing money and mainstream borrowings like credit cards, overdrafts and other loans.
A 360-degree spending view across all your financial accounts (using Open Banking)
A free credit report with tips and guidance to help improve your credit score
Native AI-led personalised financial planning to help users manage money, pay off their debts and improve their credit scores
Intelligent lending products to help reduce the cost of credit

We have built scale and are getting well recognised in the UK fintech ecosystem.
800k+ users of the mobile app, which has helped users swap c. £500m of costly credit-card debt for smarter credit, putting hundreds of thousands on a path to better financial health
The product is highly rated by our customers. We are rated 4.8 on Trustpilot, 4.8 on the Play Store, and 4.4 on the iOS Store
We were selected for Tech Nation Future Fifty 2025 - a program that recognizes and supports successful and innovative scaleups to IPOs - 30% of UK unicorns have come out of this program
Updraft once again featured in the Sifted 100 UK startups - among only 25 companies to have made the list in both 2024 and 2025

We are looking for exceptional talent to join us on our next stage of growth with a compelling proposition - purpose you can feel, impact you can measure, and ownership you'll actually hold. Expect a hybrid, London-hub culture where cross-functional squads tackle real-world problems with cutting-edge tech; generous learning budgets and wellness benefits; and the freedom to experiment, ship, and see your work reflected in customers' financial freedom. At Updraft, you'll help build a fairer credit system.

Role and Responsibilities
Join our Analytics team to deliver cutting-edge solutions.
Support business and operations teams in making better data-driven decisions by ingesting new data sources, creating intuitive dashboards and producing data insights.
Build new data processing workflows to extract data from core systems for analytic products.
Maintain and improve existing data processing workflows, contributing to the optimization and maintenance of the production data pipelines, including system and process improvements.
Contribute to the development of analytical products and dashboards with integration of internal and third-party data sources/APIs.
Contribute to cataloguing and documentation of data.

Requirements
Bachelor's degree in mathematics, statistics, computer science or a related field
2-5 years' experience in data engineering/analytics and related fields
Advanced analytical framework and experience relating data insights to business problems and creating appropriate dashboards
High proficiency in ETL, SQL and database management (mandatory)
Experience with AWS services like Glue, Athena, Redshift, Lambda, S3 (see the sketch after this list)
Python programming experience using data libraries like pandas and numpy
Interest in machine learning, logistic regression and emerging solutions for data analytics
You are comfortable working without direct supervision on outcomes that have a direct impact on the business
You are curious about the data and have a desire to ask "why?"
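As a hedged illustration of the Athena work in the requirements above, this boto3 snippet starts a query, polls for completion, and prints the result rows. The database, SQL, region, and S3 output location are invented for the example.

```python
import time

import boto3

athena = boto3.client("athena", region_name="eu-west-2")

# Kick off a query (database, SQL, and output bucket are assumptions).
response = athena.start_query_execution(
    QueryString="SELECT account_type, COUNT(*) AS n FROM transactions GROUP BY account_type",
    QueryExecutionContext={"Database": "example_analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until Athena finishes, then fetch the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```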
Good to have (not mandatory):
Experience in a startup or fintech will be considered a great advantage
Awareness of, or hands-on experience with, ML/AI implementation or MLOps
AWS foundational certification

Benefits
Opportunities to Take Ownership – Work on high-impact projects with real autonomy.
Fast Career Growth – Gain exposure to multiple business areas and advance quickly.
Be at the Forefront of Innovation – Work on cutting-edge technologies and disruptive ideas.
Collaborative & Flat Hierarchy – Work closely with leadership and have a real voice.
Dynamic, Fast-Paced Environment – No two days are the same; challenge yourself every day.
A Mission-Driven Company – Be part of something that makes a difference.
Posted 2 weeks ago
6.0 years
30 Lacs
Chennai
Remote
We Are Hiring | Data Engineer (6+ Years)

Happy to connect with you all from #LINCHPINZ! We have an exciting opportunity for a Data Engineer to join our growing team. If you're passionate about data and have experience working with modern cloud and big data technologies, we'd love to hear from you!

Role: Data Engineer
Experience: 6+ Years
Location: Chennai / Remote

Job Description: We are seeking a highly skilled Senior Data Engineer with over 6 years of experience in designing, building, and maintaining large-scale data pipelines and infrastructure. You should have hands-on experience in:
Cloud platforms – #Azure or #AWS
Big Data Technologies – #Hadoop, #Hive
Programming and Querying – #SQL, #Python, #PySpark

Job Type: Full-time
Pay: Up to ₹3,000,000.00 per year

Application Question(s): We have openings in Pune, Bangalore, Chennai and Hyderabad.

Experience:
Data Engineer: 6 years (Required)
SQL: 4 years (Required)
AWS Glue: 4 years (Required)
PySpark: 4 years (Required)
Python: 4 years (Required)
Redshift: 4 years (Required)

Location: Chennai, Tamil Nadu (Preferred)
Work Location: Remote
Posted 2 weeks ago