2.0 - 10.0 years
0 Lacs
Delhi, India
On-site
About Veersa - Veersa Technologies is a US-based IT services and AI enablement company founded in 2020, with a global delivery center in Noida (Sector 142).
Founded by industry leaders
An impressive 85% YoY growth
A profitable company since inception
Team strength: almost 400 professionals and growing rapidly

Our Services Include:
Digital & Software Solutions: Product Development, Legacy Modernization, Support
Data Engineering & AI Analytics: Predictive Analytics, AI/ML Use Cases, Data Visualization
Tools & Accelerators: AI/ML-embedded tools that integrate with client systems
Tech Portfolio Assessment: TCO analysis, modernization roadmaps, etc.

Tech Stack: AI/ML, IoT, Blockchain, MEAN/MERN stack, Python, GoLang, RoR, Java Spring Boot, Node.js
Databases: PostgreSQL, MySQL, MS SQL, Oracle
Cloud: AWS & Azure (Serverless Architecture)
Website: https://veersatech.com
LinkedIn: Feel free to explore our company profile

About The Role
We are seeking highly skilled and experienced Data Engineers and Lead Data Engineers to join our growing data team. This role is ideal for professionals with 2 to 10 years of experience in data engineering, with a strong foundation in SQL, Databricks, Spark SQL, PySpark, and BI tools such as Power BI or Tableau. As a Data Engineer, you will be responsible for building scalable data pipelines, optimizing data processing workflows, and enabling insightful reporting and analytics across the organization.

Key Responsibilities
Design and develop robust, scalable data pipelines using PySpark and Databricks (a sketch follows this posting).
Write efficient SQL and Spark SQL queries for data transformation and analysis.
Work closely with BI teams to enable reporting through Power BI or Tableau.
Optimize performance of big data workflows and ensure data quality.
Collaborate with business and technical stakeholders to gather and translate data requirements.
Implement best practices for data integration, processing, and governance.

Required Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field.
2-10 years of experience in data engineering or a similar role.
Strong experience with SQL, Spark SQL, and PySpark.
Hands-on experience with Databricks for big data processing.
Proven experience with BI tools such as Power BI and/or Tableau.
Strong understanding of data warehousing and ETL/ELT concepts.
Good problem-solving skills and the ability to work in cross-functional teams.

Nice To Have
Experience with cloud data platforms (Azure, AWS, or GCP).
Familiarity with CI/CD pipelines and version control tools (e.g., Git).
Understanding of data governance, security, and compliance standards.
Exposure to data lake architectures and real-time streaming data pipelines.
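As a rough illustration of the PySpark and Spark SQL pipeline work this posting describes, here is a minimal sketch of a batch job that reads raw data, transforms it with Spark SQL, and writes a curated table for BI consumption. The paths, table, and column names are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession

# Minimal batch pipeline sketch; names and paths are illustrative only.
spark = SparkSession.builder.appName("orders_curation").getOrCreate()

# Ingest raw data (hypothetical path and schema).
raw = spark.read.parquet("/mnt/raw/orders")
raw.createOrReplaceTempView("orders_raw")

# Transform with Spark SQL: deduplicate, then aggregate daily revenue.
curated = spark.sql("""
    SELECT order_date,
           customer_id,
           SUM(amount) AS daily_revenue
    FROM (
        SELECT DISTINCT order_id, order_date, customer_id, amount
        FROM orders_raw
        WHERE amount IS NOT NULL
    ) dedup
    GROUP BY order_date, customer_id
""")

# Persist the curated layer for BI consumption (e.g., Power BI or Tableau).
curated.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/daily_revenue")
```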
Posted 6 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
TJX Companies
At TJX Companies, every day brings new opportunities for growth, exploration, and achievement. You'll be part of our vibrant team that embraces diversity, fosters collaboration, and prioritizes your development. Whether you're working in our four global Home Offices, Distribution Centers or Retail Stores - TJ Maxx, Marshalls, HomeGoods, Homesense, Sierra, Winners, and TK Maxx - you'll find abundant opportunities to learn, thrive, and make an impact. Come join our TJX family - a Fortune 100 company and the world's leading off-price retailer.

Job Description
About TJX: TJX is a Fortune 100 company that operates off-price retailers of apparel and home fashions. TJX India - Hyderabad is the IT home office in the global technology organization of off-price apparel and home fashion retailer TJX, established to deliver innovative solutions that help transform operations globally. At TJX, we strive to build a workplace where our Associates' contributions are welcomed and are embedded in our purpose to provide excellent value to our customers every day. At TJX India, we take a long-term view of your career. We have a high-performance culture that rewards Associates with career growth opportunities, preferred assignments, and upward career advancement. We take well-being very seriously and are committed to offering a great work-life balance for all our Associates.

What You'll Discover
Inclusive culture and career growth opportunities
A truly global IT organization that collaborates across North America, Europe, Asia and Australia
Challenging, collaborative, and team-based environment

What You'll Do
The Global Supply Chain - Logistics Team is responsible for managing various supply chain logistics related solutions within TJX IT. The organization delivers capabilities that enrich the customer experience and provide business value. We seek a motivated, talented Staff Engineer with a good understanding of cloud, database and BI concepts to help architect enterprise reporting solutions across global buying, planning and allocations.

What You'll Need
The Global Supply Chain - Logistics Team thrives on strong relationships with our business partners and works diligently to address their needs, which supports TJX growth and operational stability. On this tightly knit and fast-paced solution delivery team you will be constantly challenged to stretch and think outside the box. You will work with product teams, architecture and business partners to strategically plan and deliver product features by connecting the technical and business worlds. You will need to break down complex problems into steps that drive product development while keeping product quality and security as the priority. You will be responsible for most architecture, design and technical decisions within the assigned scope.

Key Responsibilities
Design, develop, test and deploy AI solutions using Azure AI services to meet business requirements, working collaboratively with architects and other engineers.
Train, fine-tune, and evaluate AI models, including large language models (LLMs), ensuring they meet performance criteria and integrate seamlessly into new or existing solutions.
Develop and integrate APIs to enable smooth interaction between AI models and other applications, facilitating efficient model serving.
Collaborate effectively with cross-functional teams, including data scientists, software engineers, and business stakeholders, to deliver comprehensive AI solutions.
Optimize AI and ML model performance through techniques such as hyperparameter tuning and model compression to enhance efficiency and effectiveness.
Monitor and maintain AI systems, providing technical support and troubleshooting to ensure continuous operation and reliability.
Create comprehensive documentation for AI solutions, including design documents, user guides, and operational procedures, to support development and maintenance.
Stay updated with the latest advancements in AI, machine learning, and cloud technologies, demonstrating a commitment to continuous learning and improvement.
Design, code, deploy, and support software components, working collaboratively with AI architects and engineers to build impactful systems and services.
Lead medium-complexity initiatives, prioritizing and assigning tasks, providing guidance, and resolving issues to ensure successful project delivery.

Minimum Qualifications
Bachelor's degree in computer science, engineering, or a related field
8+ years of experience in data/software engineering, design, implementation and architecture.
At least 5+ years of hands-on experience in developing AI/ML solutions, with a focus on deploying them in a cloud environment.
Deep understanding of AI and ML algorithms with a focus on Operations Research / Optimization knowledge (preferably Metaheuristics / Genetic Algorithms; a toy example appears after this posting).
Strong programming skills in Python with advanced OOP concepts.
Good understanding of structured, semi-structured, and unstructured data; data modelling, data analysis, ETL and ELT.
Proficiency with Databricks and PySpark.
Experience with MLOps practices including CI/CD for machine learning models.
Knowledge of security best practices for deploying AI solutions, including data encryption and access control.
Knowledge of ethical considerations in AI, including bias detection and mitigation strategies.
This role operates in an Agile/Scrum environment and requires a solid understanding of the full software lifecycle, including functional requirement gathering, design and development, testing of software applications, and documenting requirements and technical specifications.
Fully owns epics with decreasing guidance; takes initiative by identifying gaps and opportunities.
Strong communication and influence skills.
Solid team leadership with mentorship skills.
Ability to understand the work environment and competing priorities in conjunction with developing/meeting project goals.
Shows a positive, open-minded, and can-do attitude.

Experience In The Following Technologies
Advanced Python programming (OOP)
Operations Research / Optimization knowledge (preferably Metaheuristics / Genetic Algorithms)
Databricks with PySpark
Azure / cloud knowledge
GitHub / version control
Functional knowledge of Supply Chain / Logistics is preferred.

In addition to our open door policy and supportive work environment, we also strive to provide a competitive salary and benefits package. TJX considers all applicants for employment without regard to race, color, religion, gender, sexual orientation, national origin, age, disability, gender identity and expression, marital or military status, or based on any individual's status in any group or class protected by applicable federal, state, or local law. TJX also provides reasonable accommodations to qualified individuals with disabilities in accordance with the Americans with Disabilities Act and applicable state and local law.
Address: Salarpuria Sattva Knowledge City, Inorbit Road. Location: APAC Home Office Hyderabad IN
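Since the TJX posting above emphasizes Operations Research and metaheuristics (preferably genetic algorithms) in Python, the toy example referenced in the qualifications is sketched below: a minimal genetic algorithm minimizing a sum-of-squares objective. The fitness function, mutation rate, and population size are arbitrary choices for demonstration only.

```python
import random

# Toy objective: minimize the sum of squares of a real-valued vector.
def fitness(individual):
    return -sum(x * x for x in individual)  # higher is better

def crossover(a, b):
    point = random.randrange(1, len(a))  # single-point crossover
    return a[:point] + b[point:]

def mutate(individual, rate=0.1, step=0.5):
    return [x + random.uniform(-step, step) if random.random() < rate else x
            for x in individual]

def evolve(pop_size=50, genes=5, generations=100):
    population = [[random.uniform(-10, 10) for _ in range(genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())  # converges toward the zero vector
```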
Posted 6 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About WorkSpan
The next era of growth is being driven by business interoperability. Cloud, genAI, solutions combining services and software - more and more, companies outpace their competition not just by building superior products, but by creating stronger partnerships, paths to market, and better business models for winning together. Cloud providers, service providers, tech partners and resellers are teaming up to win more deals together through co-selling. WorkSpan is building the world's largest, trusted co-selling network. WorkSpan already has seven of the world's ten largest partner ecosystems on our platform and $50B of customer pipeline under active management. AWS, Google, Microsoft, MongoDB, PagerDuty, Databricks and dozens of others trust WorkSpan to accelerate and amplify their ecosystem strategies. With a $30M Series C and backing from world-class investors Insight Partners, Mayfield, and M12, WorkSpan is poised to drive the future of B2B. Come be a part of it.

Join Our Team For The Opportunity To
Own your results and make a tangible impact on the business
Develop a deep understanding of GTM, working closely with leadership across sales & marketing
Work with driven, passionate people every day
Be a part of an ambitious, supportive team on a mission
Be the end-to-end owner of your product
Work closely with Marketing, Sales, Pre-Sales, Professional Services, Customer Success, Support, Product, and Engineering teams to support the expansion and deliver solutions for our customers
Engage with and evaluate the needs of our customers and partners to discover important problems and opportunities that can be translated into the right, compelling products and features
Be the voice of the customer and help provide context, empathy, and rationale behind customer needs. Less "customer wants a feature", more "they want to solve a problem that helps with..."
Create, maintain and socialize a high-level business strategy roadmap through meaningful collaboration and ruthless data-driven prioritization
Deliver product demos and presentations for your product to existing and prospective customers. Ensure competitive differentiation.
Collaborate with UX and Engineering to deliver product delight to the market.
Be passionate about building a winning product, be driven to make customers successful, and be resourceful enough to thrive in a startup environment. Have a bias toward action and iterate to deliver the optimal product.

Your Primary Responsibilities
Work closely with the management team to identify the target segment, the problems to solve and the differentiated value propositions
Collaborate effectively within the team and with design, engineering, sales, marketing, growth, and customer success.
Lead product initiatives to ensure timely and high-quality delivery of the product to the market.
Engage with customers and with customer-facing teams to effectively market and sell the product.
Engage in user research to solidify concepts and impact growth.
Have a data-oriented approach to making the tradeoffs that are core to the product management function

Skills And Qualifications We Are Looking For
3+ years of product management experience working on a SaaS offering, preferably with Enterprise CRM or ERP products
1+ years of development experience is an added advantage
Proven experience in building B2B SaaS applications and API development
Excellent presentation skills, including strong oral and written capabilities; ability to clearly communicate compelling messages to internal and external audiences
Experience operating in a hyper-growth / entrepreneurial environment and scaling new and emerging products from the ground up

Company Perks & Benefits
💰 Competitive salary, equity, and performance bonus
🏖 Unlimited vacation
🤕 Paid sick leave
💻 Latest MacBook
🏥 Medical insurance
🏋️ Monthly wellness stipend
🍼 Paid maternity and paternity leave

Why join us?
💡 We created the fast-growing Ecosystem Business Management category
🚀 We're growing rapidly, and the sky's the limit - we just raised a Series C to help us expand
🦄 We've built an extremely efficient go-to-market engine
🥇 Work with a talented team you'll learn a lot from
🙏 We care about delivering value to our awesome customers
🗣️ We are flexible in our opinions and always open to new ideas
💡 We innovate continuously, with a focus on long-term success
🌍 We know it takes people with different ideas, strengths, backgrounds, cultures, weaknesses, opinions, and interests to make our company succeed. We celebrate our differences and are lucky to have teammates worldwide.
🤝 Buddy system: it's dangerous to go alone, so we got you a buddy 🙌. In some realms they use the term mentor, but we don't think that is a good description. Your buddy will be mentoring you, but he/she will also be your friend and your first point of contact during the onboarding period.

Other Cool Things About WorkSpan
❓ What is WorkSpan? https://www.workspan.com/what-is-workspan/
💙 Our values: https://www.workspan.com/careers/
🔊 Videos of events and customer speakers: https://www.youtube.com/c/WorkSpan/videos
🆕 Latest updates from WorkSpan: https://www.linkedin.com/company/workspan/posts/

WorkSpan ensures equal employment opportunity without discrimination or harassment based on race, color, religion, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity or expression, age, disability, national origin, marital or domestic/civil partnership status, genetic information, citizenship status, veteran status, or any other characteristic protected by law.
Posted 6 days ago
25.0 years
0 Lacs
Kochi, Kerala, India
On-site
Company Overview
Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture, and for over 25 years we have a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential.

Our seasoned professionals deliver services based on Milestone's best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed.

Job Overview
In this vital role you will be responsible for the development and implementation of our data strategy. The ideal candidate possesses a strong blend of technical expertise and data-driven problem-solving skills. As a Data Engineer, you will play a crucial role in building and optimizing our data pipelines and platforms in a SAFe Agile product team.
Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
Deliver data pipeline projects from development to deployment, managing timelines and risks.
Ensure data quality and integrity through meticulous testing and monitoring (see the sketch after this posting).
Leverage cloud platforms (AWS, Databricks) to build scalable and efficient data solutions.
Work closely with the product team and key collaborators to understand data requirements.
Adhere to data engineering industry standards and best practices.
Experience developing in an Agile development environment, and comfort with Agile terminology and ceremonies.
Familiarity with code versioning using Git and code migration tools. Familiarity with JIRA.
Stay up to date with the latest data technologies and trends.

Basic Qualifications
What we expect of you: a Doctorate degree OR a Master's degree and 4 to 6 years of Information Systems experience OR a Bachelor's degree and 6 to 8 years of Information Systems experience OR a Diploma and 10 to 12 years of Information Systems experience.
Demonstrated hands-on experience with cloud platforms (AWS, Azure, GCP)
Proficiency in Python, PySpark, SQL.
Development knowledge in Databricks.
Good analytical and problem-solving skills to address sophisticated data challenges.
Preferred Qualifications
Experienced with data modeling
Experienced working with ETL orchestration technologies
Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
Familiarity with SQL/NoSQL databases

Soft Skills
Skilled in breaking down problems, documenting problem statements, and estimating efforts.
Effective communication and interpersonal skills to collaborate with multi-functional teams.
Excellent analytical and problem-solving skills.
Strong verbal and written communication skills.
Ability to work successfully with global teams.
High degree of initiative and self-motivation.
Team-oriented, with a focus on achieving team goals.

Compensation
Estimated Pay Range: Exact compensation and offers of employment are dependent on circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location.

Our Commitment to Diversity & Inclusion
At Milestone we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work. We know creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees. We welcome the unique background, culture, experiences, knowledge, innovation, self-expression and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.
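The Milestone posting stresses ensuring data quality and integrity through testing and monitoring; the sketch below shows one hypothetical way to express assertion-style quality checks on a PySpark DataFrame. The rules, table path, and column names are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

# Hypothetical input table produced by an upstream pipeline.
df = spark.read.parquet("/mnt/curated/customers")

def check(name, bad_rows, threshold=0):
    """Fail loudly when a data quality rule is violated."""
    if bad_rows > threshold:
        raise ValueError(f"DQ check failed: {name} ({bad_rows} bad rows)")
    print(f"DQ check passed: {name}")

# Rule 1: primary key must be unique.
dupes = df.groupBy("customer_id").count().filter(F.col("count") > 1).count()
check("unique customer_id", dupes)

# Rule 2: required fields must not be null.
nulls = df.filter(F.col("email").isNull()).count()
check("non-null email", nulls)

# Rule 3: values must fall in an expected range.
bad_ages = df.filter((F.col("age") < 0) | (F.col("age") > 120)).count()
check("age in [0, 120]", bad_ages)
```

Checks like these would typically run as a gating step after each pipeline load, with failures surfaced to monitoring rather than silently passed downstream.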
Posted 6 days ago
7.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
ECI is the leading global provider of managed services, cybersecurity, and business transformation for mid-market financial services organizations across the globe. From its unmatched range of services, ECI provides stability, security and improved business performance, freeing clients from technology concerns and enabling them to focus on running their businesses. More than 1,000 customers worldwide with over $3 trillion of assets under management put their trust in ECI. At ECI, we believe success is driven by passion and purpose. Our passion for technology is only surpassed by our commitment to empowering our employees around the world.

The Opportunity: ECI has an exciting opportunity for a Cloud Data Engineer. This full-time position is open for an experienced Senior Data Engineer who will support several of our clients' systems. Client satisfaction is our primary objective; all available positions are customer-facing, requiring EXCELLENT communication and people skills. A positive attitude, rigorous work habits and professionalism in the workplace are a must. Fluency in English, both written and verbal, is required. This is an onsite role.

What you will do:
A senior cloud data engineer with 7+ years of experience.
Strong knowledge and hands-on experience with Azure data services such as Azure Data Factory, Azure Synapse Analytics, Azure SQL Database, Azure Data Lake, Logic Apps, Apache Spark, Snowflake data warehouse, and Azure Fabric. Good to have: Azure Databricks, Azure Cosmos DB, Azure AI, etc.
Must have experience in developing cloud-based applications. Should be able to analyze problems and provide solutions.
Experience in designing, implementing, and managing data warehouse solutions using Azure Synapse Analytics or similar technologies.
Experience in migrating data from on-premises to the cloud (see the sketch after this posting).
Proficiency in data modeling techniques and experience in designing and implementing complex data models.
Experience in designing and developing ETL/ELT processes to move data between systems and transform data for analytics.
Strong programming skills in languages such as SQL, Python, or Scala, with experience in developing and maintaining data pipelines.
Experience in at least one of the reporting tools such as Power BI / Tableau.
Ability to work effectively in a team environment and communicate complex technical concepts to non-technical stakeholders.
Experience in managing and optimizing databases, including performance tuning, troubleshooting, and capacity planning.
Understand business requirements and convert them to technical designs for implementation; perform analysis and develop and test code.
Design and develop cloud-based applications using Python on a serverless framework.
Strong communication, analytical, and troubleshooting skills.
Create, maintain and enhance applications.
Work independently as an individual contributor with minimum or no help.
Follow Agile Methodology (SCRUM).

Who you are:
Experience in developing cloud-based data applications.
Hands-on in Azure data services, data warehousing, ETL, etc.
Understanding of cloud architecture principles and best practices, including scalability, high availability, disaster recovery, and cost optimization, with a focus on designing data solutions for the cloud.
Experience in developing pipelines using ADF and Synapse.
Hands-on experience in migrating data from on-premises to the cloud.
Strong experience in writing complex SQL scripts and transformations.
Able to analyze problems and provide solutions.
Knowledge of CI/CD pipelines is a plus.
Knowledge of Python and API Gateway is an added advantage.

Bonus (Nice to have): Product Management/BA experience.

ECI's culture is all about connection - connection with our clients, our technology and most importantly with each other. In addition to working with an amazing team around the world, ECI also offers a competitive compensation package and so much more! If you believe you would be a great fit and are ready for your best job ever, we would like to hear from you!

Love Your Job, Share Your Technology Passion, Create Your Future Here!
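Given this posting's emphasis on migrating on-premises data to Azure and building ETL pipelines, here is a hedged sketch of one common pattern, a watermark-based incremental extract via Spark's JDBC reader. The connection string, credentials, table, and watermark column are placeholder assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental_load").getOrCreate()

# Placeholder connection details for an on-premises SQL Server source.
jdbc_url = "jdbc:sqlserver://onprem-host:1433;databaseName=sales"
props = {"user": "etl_user", "password": "***",
         "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"}

target_path = "/mnt/lake/orders"

# Read the last processed watermark from the target; default to a floor
# value on the first run or if the target does not exist yet.
last_ts = None
try:
    last_ts = spark.read.parquet(target_path).agg(F.max("modified_at")).first()[0]
except Exception:
    pass
if last_ts is None:
    last_ts = "1900-01-01 00:00:00"

# Pull only rows changed since the last run.
query = f"(SELECT * FROM dbo.orders WHERE modified_at > '{last_ts}') AS incr"
delta = spark.read.jdbc(url=jdbc_url, table=query, properties=props)

# Append the new slice to the data lake target.
delta.write.mode("append").parquet(target_path)
```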
Posted 6 days ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position Overview:
We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent. You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members.

Key Responsibilities:
• Perform hands-on Big Data development work, including coding, testing, troubleshooting, and deploying solutions.
• Support ongoing client projects, addressing technical challenges and ensuring smooth delivery.
• Collaborate with junior engineers to guide them on coding standards, best practices, debugging, and project execution.
• Review code and provide feedback to junior engineers to maintain high-quality and scalable solutions.
• Assist in designing and implementing solutions using Hadoop, Spark, Hive, HDFS, and Kafka (a streaming sketch follows this posting).
• Lead by example in object-oriented development, particularly using Scala and Java.
• Translate complex requirements into clear, actionable technical tasks for the team.
• Contribute to the development of ETL processes for integrating data from various sources.
• Document technical approaches, best practices, and workflows for knowledge sharing within the team.

Required Skills and Qualifications:
• 8+ years of professional experience in Big Data development and engineering.
• Strong hands-on expertise with Hadoop, Hive, HDFS, Apache Spark, and Kafka.
• Solid object-oriented development experience with Scala and Java.
• Strong SQL skills with experience working with large data sets.
• Practical experience designing, installing, configuring, and supporting Big Data clusters.
• Deep understanding of ETL processes and data integration strategies.
• Proven experience mentoring or supporting junior engineers in a team setting.
• Strong problem-solving, troubleshooting, and analytical skills.
• Excellent communication and interpersonal skills.

Preferred Qualifications:
• Professional certifications in Big Data technologies (Cloudera, Databricks, AWS Big Data Specialty, etc.).
• Experience with cloud Big Data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc).
• Exposure to Agile or DevOps practices in Big Data project environments.

What We Offer:
Opportunity to work on challenging, high-impact Big Data projects.
Leadership role in shaping and mentoring the next generation of engineers.
Supportive and collaborative team culture.
Flexible working environment.
Competitive compensation and professional growth opportunities.
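As an illustration of the Spark-plus-Kafka stack this role works with, the streaming sketch referenced above consumes a Kafka topic with Spark Structured Streaming. It is written in PySpark for consistency with the rest of this page, although the posting emphasizes Scala and Java; the broker address, topic, and checkpoint path are hypothetical, and the job assumes the spark-sql-kafka connector package is available.

```python
from pyspark.sql import SparkSession, functions as F

# Assumes the job is submitted with the Kafka connector, e.g.
# --packages org.apache.spark:spark-sql-kafka-0-10_2.12:<spark-version>
spark = SparkSession.builder.appName("events_stream").getOrCreate()

# Subscribe to a hypothetical Kafka topic.
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "events")
          .option("startingOffsets", "latest")
          .load())

# Kafka delivers key/value as binary; decode the payload to a string.
events = stream.select(F.col("value").cast("string").alias("payload"))

# Write to console for demonstration; a production job would typically
# sink to HDFS, Hive, or a Delta table, keeping the checkpoint for recovery.
query = (events.writeStream.format("console")
         .option("checkpointLocation", "/tmp/checkpoints/events")
         .start())
query.awaitTermination()
```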
Posted 6 days ago
15.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Your Team Responsibilities
MSCI has an immediate opening in one of our fastest-growing product lines. As a Lead Architect within Sustainability and Climate, you are an integral part of a team that works to develop high-quality architecture solutions for various software applications on modern cloud-based technologies. As a core technical contributor, you are responsible for conducting critical architecture solutions across multiple technical areas to support project goals. The systems under your responsibility will be amongst the most mission-critical systems of MSCI. They require strong technology expertise and a strong sense of enterprise system design, state-of-the-art scalability and reliability, but also innovation. Your ability to make technology decisions in a consistent framework to support the growth of our company and products, lead the various software implementations in close partnership with global leaders and multiple product organizations, and drive technology innovations will be the key measures of your success in our dynamic and rapidly growing environment. At MSCI, you will be operating in a culture where we value merit and track record. You will own the full life-cycle of the technology services and provide management, technical and people leadership in the design, development, quality assurance and maintenance of our production systems, making sure we continue to scale our great franchise.

Your Skills And Experience That Will Help You Excel
Prior senior Software Architecture roles
Proficiency in programming languages such as Python/Java/Scala and knowledge of SQL and NoSQL databases
Drive the development of conceptual, logical, and physical data models aligned with business requirements
Lead the implementation and optimization of data technologies, including Apache Spark
Experience with one of the table formats, such as Delta or Iceberg
Strong hands-on experience in data architecture, database design, and data modeling
Proven experience as a Data Platform Architect or in a similar role, with expertise in Airflow, Databricks, Snowflake, Collibra, and Dremio (an Airflow sketch follows this posting)
Experience with cloud platforms such as AWS, Azure, or Google Cloud
Ability to dive into details; a hands-on technologist with strong core computer science fundamentals
Strong preference for financial services experience
Proven leadership of large-scale distributed software teams that have delivered great products on deadline
Experience in a modern iterative software development methodology
Experience with globally distributed teams and business partners
Experience in building and maintaining applications that are mission-critical for customers
M.S. in Computer Science, Management Information Systems or a related engineering field
15+ years of software engineering experience
Demonstrated consensus builder and collegial peer

About MSCI
What we offer you
Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
Flexible working arrangements, advanced technology, and collaborative workspaces.
A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients.
Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro and tailored learning opportunities for ongoing skills development.
Multi-directional career paths that offer professional growth and development through new challenges, internal mobility and expanded roles.
We actively nurture an environment that builds a sense of inclusion, belonging and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose - to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.

MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.

Note on recruitment scams: we are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com
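Because the MSCI posting lists Airflow alongside Databricks and Snowflake as core platform skills, here is a minimal, hypothetical Airflow DAG sketch (assuming a recent Airflow 2.x installation). The DAG id, schedule, and task bodies are invented placeholders, not details from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling source data")  # placeholder for real extract logic

def load():
    print("loading into warehouse")  # placeholder for real load logic

# A daily two-step pipeline; dag_id and schedule are arbitrary examples.
with DAG(
    dag_id="daily_climate_feed",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_load  # run extract before load
```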
Posted 6 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team.

Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions
Deliver on critical business priorities while ensuring alignment with the wider architectural vision
Identify and help address potential risks in the data supply chain
Follow and contribute to technical standards
Design and develop analytical data models

Required Qualifications & Work Experience
First Class Degree in Engineering/Technology (4-year graduate course)
9 to 11 years' experience implementing data-intensive solutions using agile methodologies
Experience of relational databases and using SQL for data querying, transformation and manipulation
Experience of modelling data for analytical consumers
Ability to automate and streamline the build, test and deployment of data pipelines
Experience in cloud native technologies and patterns
A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
Excellent communication and problem-solving skills
An inclination to mentor; an ability to lead and deliver medium-sized components independently

Technical Skills (Must Have)
ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
Big Data: Experience of 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
Data Warehousing & Database Management: Expertise around Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management
Data Governance: A strong grasp of principles and practice including data quality, security, privacy and compliance

Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg and Delta (a small example follows below)
Others: Experience of using a job scheduler, e.g., Autosys. Exposure to Business Intelligence tools, e.g., Tableau, Power BI
Certification on any one or more of the above topics would be an advantage.
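To illustrate the file-format exposure listed above (Avro, Parquet, etc.), the small example referenced in the valuable skills is sketched here: converting Avro input to partitioned Parquet in PySpark. Paths and the partition column are placeholders, and the reader assumes the external spark-avro package is on the classpath.

```python
from pyspark.sql import SparkSession

# Assumes the job is started with the spark-avro connector, e.g.
# --packages org.apache.spark:spark-avro_2.12:<spark-version>
spark = SparkSession.builder.appName("avro_to_parquet").getOrCreate()

# Read hypothetical Avro event files from a landing zone.
events = spark.read.format("avro").load("/data/landing/events")

# Write columnar, partitioned Parquet for the curated zone.
(events.write
       .mode("overwrite")
       .partitionBy("event_date")
       .parquet("/data/curated/events"))
```

Row-oriented Avro suits write-heavy ingestion, while columnar, partitioned Parquet suits analytical scans; a conversion step like this often sits between a landing zone and a curated zone.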
Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 6 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team.

Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions
Deliver on critical business priorities while ensuring alignment with the wider architectural vision
Identify and help address potential risks in the data supply chain
Follow and contribute to technical standards
Design and develop analytical data models

Required Qualifications & Work Experience
First Class Degree in Engineering/Technology (4-year graduate course)
9 to 11 years' experience implementing data-intensive solutions using agile methodologies
Experience of relational databases and using SQL for data querying, transformation and manipulation
Experience of modelling data for analytical consumers
Ability to automate and streamline the build, test and deployment of data pipelines
Experience in cloud native technologies and patterns
A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
Excellent communication and problem-solving skills
An inclination to mentor; an ability to lead and deliver medium-sized components independently

Technical Skills (Must Have)
ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
Big Data: Experience of 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
Data Warehousing & Database Management: Expertise around Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management
Data Governance: A strong grasp of principles and practice including data quality, security, privacy and compliance

Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg and Delta
Others: Experience of using a job scheduler, e.g., Autosys. Exposure to Business Intelligence tools, e.g., Tableau, Power BI
Certification on any one or more of the above topics would be an advantage.
Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 6 days ago
10.0 - 13.0 years
8 - 17 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Detailed job description - Skill Set:
Looking for a highly experienced (10+ years) and deeply hands-on Data Architect to lead the design, build, and optimization of our data platforms on AWS and Databricks. This role requires a strong blend of architectural vision and direct implementation expertise, ensuring scalable, secure, and performant data solutions from concept to production. Strong hands-on experience in data engineering/architecture, hands-on architectural and implementation experience on AWS and Databricks, and schema modeling.
AWS: Deep hands-on expertise with key AWS data services and infrastructure.
Databricks: Expert-level hands-on development with Databricks (Spark SQL, PySpark), Delta Lake, and Unity Catalog (an upsert sketch follows below).
Coding: Exceptional proficiency in Python, PySpark, Spark, AWS services and SQL.
Architectural: Strong data modeling and architectural design skills with a focus on practical implementation.
Preferred: AWS/Databricks certifications, experience with streaming technologies, and other data tools.
Design & Build: Lead and personally execute the design, development, and deployment of complex data architectures and pipelines on AWS (S3, Glue, Lambda, Redshift, etc.) and Databricks (PySpark/Spark SQL, Delta Lake, Unity Catalog).
Databricks Expertise: Own the hands-on development, optimization, and performance tuning of Databricks jobs, clusters, and notebooks.

Mandatory Skills: AWS, Databricks
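Since this role centers on hands-on Databricks development with Delta Lake and Unity Catalog, the upsert sketch referenced above is shown below as a Delta MERGE in PySpark. The catalog/schema/table names and match key are assumptions for illustration, not details from the posting.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_upsert").getOrCreate()

# Incoming batch of changed customer rows (hypothetical staging source).
updates = spark.read.parquet("/mnt/staging/customer_updates")

# Upsert into a Delta target registered in the metastore; the
# three-level name assumes a Unity Catalog-style catalog.schema.table.
target = DeltaTable.forName(spark, "main.sales.customers")

(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()     # refresh existing rows
       .whenNotMatchedInsertAll()  # add new rows
       .execute())
```

MERGE keeps the target table idempotently in sync with the source batch, which is why it is the usual building block for CDC-style loads on Delta Lake.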
Posted 6 days ago
6.0 - 10.0 years
12 - 15 Lacs
Chennai, Coimbatore, Mumbai (All Areas)
Work from Office
We have an urgent requirement for the role of Senior Azure Data Engineer.
Experience: 6 years. Notice Period: 0-15 days max. Position: C2H.
Should be able to work flexible timings. Communication should be excellent.
Must Have: Strong understanding of ADF, Azure, Databricks, PySpark; strong understanding of SQL, ADO, Power BI. Unity Catalog is mandatory.
Posted 6 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Role:
As a Director in Software Engineering, you will provide comprehensive leadership to senior managers and high-level professionals. You will have primary responsibility for the performance and results within your area, ensuring that all software engineering activities align with business strategies. Your role is crucial for steering the direction of major projects and technological advancements that will drive the company forward.

Responsibilities:
Provide strategic leadership and direction for the product software engineering department, aligning it with overall business objectives in the context of a matrixed organization.
Communicate effectively in a matrixed organization with senior management, peers, and subordinates to ensure alignment and collaboration.
Develop and define departmental objectives, strategies, and goals to drive the success of software projects.
Establish and maintain positive interpersonal relationships within the department and with other stakeholders.
Stay updated with relevant knowledge, technologies, and best practices to drive innovation within the department.
Ensure compliance with quality standards and best practices in software development.
Make critical decisions and solve complex problems related to software development and team management.
Develop and build high-performing teams of software engineers, fostering their growth and productivity.
Organize, plan, and prioritize the department's work to ensure efficient use of resources and timely project delivery.
Utilize data analysis and information to drive data-driven decisions and measure the success of software products.
Monitor development processes, framework adoptions, and project surroundings, optimizing efficiency and adherence to standards.
Provide coaching and mentorship to team members, fostering their professional growth and development.
Provide guidance and direction to subordinates, ensuring they align with the department's vision.
Monitor ongoing processes, materials, or surroundings, providing feedback for continuous improvement.
Evaluate information and software products to ensure compliance with industry standards.

Skills:
DevOps: An ability to use systems and processes to coordinate between development and operations teams in order to improve and speed up software development processes. This includes automation, continuous delivery, agility, and rapid response to feedback.
Product Software Engineering: The ability to design, develop, test, and deploy software products. It involves understanding user needs, defining functional specifications, designing system architecture, coding, debugging, and ensuring product quality. It also requires knowledge of various programming languages, tools and methodologies, and the ability to work within diverse teams and manage projects.
Cloud Computing: The ability to utilize and manage applications, data, and services on the internet rather than on a personal computer or local server. This skill involves understanding various cloud services (like AWS, Google Cloud, Azure), managing resources online, and setting up cloud-based platforms for a business environment.
Implementation and Delivery: This is a skill that pertains to the ability to translate plans and designs into action. It involves executing strategies effectively, overseeing the delivery of projects or services, and ensuring they are completed in a timely and efficient manner. It also necessitates the coordination of various tasks and management of resources to achieve the set objectives.
Problem Solving: The ability to understand a complex situation or issue and devise a solution by defining the problem, identifying potential strategies, and ultimately choosing and implementing the most effective course of action.
People Management: The ability to lead, motivate, engage and communicate effectively with a team. This includes skills in delegation, conflict resolution, negotiation, and understanding team dynamics. It also involves building a strong team culture and managing individual performance.
Agile: The ability to swiftly and effectively respond to changes, with an emphasis on continuous improvement and flexibility. In the context of project management, it denotes a methodology that promotes adaptive planning and encourages rapid and flexible responses to changes.
APIs: The ability to design, develop, and manage Application Programming Interfaces, which constitute the set of protocols and tools used for building application software. This skill includes the capacity to create and maintain high-quality API documentation, implement API security practices, and understand API testing techniques. Additionally, having this ability means understanding how APIs enable interaction between different software systems, allowing them to communicate with each other.
Analysis: The ability to examine complex situations or problems, break them down into smaller parts, and understand how these parts work together.
Automation: The ability to design, implement, manage, and optimize automated systems or processes, often using various software tools and technologies. This skill includes understanding both the technical elements and the business implications of automated systems.
Frameworks: The ability to understand, utilise, and create structured environments for software development. This skill also involves being able to leverage existing frameworks to streamline processes, ensuring better efficiency and code manageability in software development projects.
Financial Budget Management: The ability to plan, coordinate, control, and execute financial resources over a certain period, and make decisions on the distribution of resources efficiently and effectively. This includes estimating revenues, costs and expenses, and ensuring they align with the set goals or targets.
Application Security: The ability to protect applications from threats and attacks by identifying, fixing, and preventing security vulnerabilities. This skill involves the use of software methods and systems to protect applications against security threats.
Architectural Patterns: The ability to understand, analyze, and apply predefined design solutions to structural problems in architecture and software development. This skill involves applying proven patterns to resolve complex design challenges and create efficient and scalable structures, maintaining a balance between functional requirements and aesthetic appeal.

Competencies:
Judgement & Decision Making
Accountability
Inclusive Collaboration
Inspiration & Alignment
Courage to Take Smart Risks
Financial Acumen

Applicants may be required to appear onsite at a Wolters Kluwer office as part of the recruitment process.
Posted 6 days ago
1.0 - 6.0 years
3 - 8 Lacs
Hyderabad
Work from Office
Role Description:
The role is responsible for designing, developing, and maintaining software solutions for Research scientists. Additionally, it involves automating operations, monitoring system health, and responding to incidents to minimize downtime. You will join a multi-functional team of scientists and software professionals that enables technology and data capabilities to evaluate drug candidates and assess their abilities to affect the biology of drug targets. This team implements scientific software platforms that enable the capture, analysis, storage, and reporting for our Large Molecule Discovery Research team (Design, Make, Test and Analyze processes). The team also interfaces heavily with teams supporting our in vitro assay management systems and our compound inventory platforms. The ideal candidate possesses experience in the pharmaceutical or biotech industry, strong technical skills, and full stack software engineering experience (spanning SQL, back-end, front-end web technologies, and automated testing).

Roles & Responsibilities:
Work closely with the product team, business team including scientists, and other stakeholders.
Analyze and understand the functional and technical requirements of applications, solutions and systems and translate them into software architecture and design specifications.
Design, develop, and implement applications and modules, including custom reports, interfaces, and enhancements.
Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software.
Conduct code reviews to ensure code quality and adherence to best practices.
Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations.
Provide ongoing support and maintenance for applications, ensuring that they operate smoothly and efficiently.
Stay updated with the latest technology and security trends and advancements.

Basic Qualifications and Experience:
Master's degree with 1 - 3 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field OR
Bachelor's degree with 4 - 6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field OR
Diploma with 7 - 9 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field

Preferred Qualifications and Experience:
1+ years of experience in implementing and supporting biopharma scientific software platforms

Functional Skills:
Must-Have Skills:
Proficient in Java or Python
Proficient in at least one JavaScript UI framework (e.g., ExtJS, React, or Angular)
Proficient in SQL (e.g., Oracle, PostgreSQL, Databricks)

Good-to-Have Skills:
Experience with event-based architecture and serverless AWS services such as EventBridge, SQS, Lambda or ECS (a Lambda sketch follows this posting).
Experience with Benchling
Hands-on experience with full stack software development
Strong understanding of software development methodologies, mainly Agile and Scrum
Working experience with DevOps practices and CI/CD pipelines
Experience with infrastructure as code (IaC) tools (Terraform, CloudFormation)
Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk)
Experience with automated testing tools and frameworks
Experience with big data technologies (e.g., Spark, Databricks, Kafka)
Experience with leveraging AI assistants (e.g.
GitHub Copilot) to accelerate software development and improve code quality.

Professional Certifications: AWS Certified Cloud Practitioner preferred.

Soft Skills:
Excellent problem-solving, analytical, and troubleshooting skills
Strong communication and interpersonal skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to learn quickly and work independently
Team-oriented, with a focus on achieving team goals
Ability to manage multiple priorities successfully
Strong presentation and public speaking skills
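As a brief, hypothetical illustration of the event-driven AWS services this posting lists (EventBridge, SQS, Lambda), the Lambda sketch referenced above processes a batch of SQS messages. The event shape follows the standard SQS-to-Lambda integration; the payload fields and business logic are invented.

```python
import json

def handler(event, context):
    """Entry point for an SQS-triggered AWS Lambda function."""
    processed = 0
    # SQS delivers a batch of messages under the "Records" key.
    for record in event.get("Records", []):
        body = json.loads(record["body"])
        # Placeholder business logic: log a hypothetical assay payload.
        print(f"Processing sample {body.get('sample_id')}: {body}")
        processed += 1
    return {"statusCode": 200, "processed": processed}
```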
Posted 6 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: MLOps Engineer
Location: Chennai - CKC
Mode of Interview: In Person
Date: 7th June 2025 (Saturday)

Keywords / Skillset: AWS SageMaker, Azure ML Studio, GCP Vertex AI; PySpark, Azure Databricks; MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline; Kubernetes, AKS, Terraform, FastAPI

Responsibilities:
- Model deployment, model monitoring, model retraining
- Deployment, inference, monitoring, and retraining pipelines
- Drift detection (data drift and model drift)
- Experiment tracking
- MLOps architecture
- REST API publishing

Job Responsibilities:
- Research and implement MLOps tools, frameworks, and platforms for our Data Science projects.
- Work on a backlog of activities to raise MLOps maturity in the organization.
- Proactively introduce a modern, agile, and automated approach to Data Science.
- Conduct internal training and presentations about the benefits and usage of MLOps tools.

Required Experience and Qualifications:
- Wide experience with Kubernetes.
- Experience in operationalizing Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g., Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
- Good understanding of ML and AI concepts.
- Hands-on experience in ML model development.
- Proficiency in Python for both ML and automation tasks.
- Good knowledge of Bash and the Unix command-line toolkit.
- Experience in CI/CD/CT pipeline implementation.
- Experience with cloud platforms - preferably AWS - would be an advantage.
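Experiment tracking is one of the responsibilities listed above. As a minimal sketch, here is how a training run might be logged with MLflow; the model, parameters, and toy dataset are illustrative only.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy dataset standing in for a real training set.
X, y = make_classification(n_samples=1_000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="baseline-rf"):
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Parameters and metrics become queryable run metadata.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")  # artifact a deployment pipeline could pick up
```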
Posted 6 days ago
8.0 - 13.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Qualification & Experience: Minimum of 8 years of experience as a Data Scientist/Engineer, with demonstrated expertise in data engineering and cloud computing technologies.

Technical Responsibilities:
- Excellent proficiency in Python, with a strong focus on developing advanced skills.
- Extensive exposure to NLP and image processing concepts.
- Proficient in version control systems like Git.
- In-depth understanding of Azure deployments.
- Expertise in OCR, ML model training, and transfer learning.
- Experience working with unstructured data formats such as PDFs, DOCX, and images.
- Strong familiarity with data science best practices and the ML lifecycle.
- Strong experience with data pipeline development, ETL processes, and data engineering tools such as Apache Airflow, PySpark, or Databricks.
- Familiarity with cloud computing platforms like Azure, AWS, or GCP, including services like Azure Data Factory, S3, Lambda, and BigQuery.

Tool Exposure: Advanced understanding and hands-on experience with Git, Azure, Python, R programming, and data engineering tools such as Snowflake, Databricks, or PySpark.

- Data mining, cleaning, and engineering: Leading the identification and merging of relevant data sources, ensuring data quality, and resolving data inconsistencies.
- Cloud solutions architecture: Designing and deploying scalable data engineering workflows on cloud platforms such as Azure, AWS, or GCP.
- Data analysis: Executing complex analyses against business requirements using appropriate tools and technologies.
- Software development: Leading the development of reusable, version-controlled code under minimal supervision.
- Big data processing: Developing solutions to handle large-scale data processing using tools like Hadoop, Spark, or Databricks.

Principal Duties & Key Responsibilities:
- Leading data extraction from multiple sources, including PDFs, images, databases, and APIs.
- Driving optical character recognition (OCR) processes to digitize data from images.
- Applying advanced natural language processing (NLP) techniques to understand complex data.
- Developing and implementing highly accurate statistical models and data engineering pipelines to support critical business decisions, and continuously monitoring their performance.
- Designing and managing scalable cloud-based data architectures using Azure, AWS, or GCP services.
- Collaborating closely with business domain experts to identify and drive key business value drivers.
- Documenting model design choices, algorithm selection processes, and dependencies.
- Collaborating effectively in cross-functional teams within the CoE and across the organization.
- Proactively seeking opportunities to contribute beyond assigned tasks.

Required Competencies:
- Exceptional communication and interpersonal skills.
- Proficiency in Microsoft Office 365 applications.
- Ability to work independently, demonstrate initiative, and provide strategic guidance.
- Strong networking, communication, and people skills.
- Outstanding organizational skills, with the ability to work independently and as part of a team.
- Excellent technical writing skills.
- Effective problem-solving abilities.
- Flexibility and adaptability to work flexible hours as required.

Key Competencies / Values:
- Client focus: Tailoring skills and understanding client needs to deliver exceptional results.
- Excellence: Striving for excellence as defined by clients, delivering high-quality work.
- Trust: Building and retaining trust with clients, colleagues, and partners.
- Teamwork: Collaborating effectively to achieve collective success.
- Responsibility: Taking ownership of performance and safety, ensuring accountability.
- People: Creating an inclusive environment that fosters individual growth and development.
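As an illustration of the pipeline-orchestration skills listed above, here is a minimal Airflow 2.x DAG sketch for a hypothetical document-ingestion flow (extract, OCR, load); the DAG name is invented and the task bodies are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_documents():
    """Placeholder: pull PDFs/images from a landing zone (e.g., blob storage)."""

def run_ocr():
    """Placeholder: digitize the scanned documents with an OCR engine/service."""

def load_to_warehouse():
    """Placeholder: write the structured output to the analytics store."""

with DAG(
    dag_id="document_ingestion",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",             # Airflow 2.4+ style scheduling argument
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_documents)
    ocr = PythonOperator(task_id="ocr", python_callable=run_ocr)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    extract >> ocr >> load  # linear dependency chain
```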
Posted 6 days ago
12.0 - 14.0 years
14 - 18 Lacs
Hyderabad, Bengaluru
Hybrid
We are looking for a highly experienced (10+ years) and deeply hands-on Data Architect to lead the design, build, and optimization of our data platforms on AWS and Databricks. This role requires a strong blend of architectural vision and direct implementation expertise, ensuring scalable, secure, and performant data solutions from concept to production. Strong hands-on experience in data engineering/architecture, hands-on architectural and implementation experience on AWS and Databricks, and schema modeling are required.

- AWS: Deep hands-on expertise with key AWS data services and infrastructure.
- Databricks: Expert-level hands-on development with Databricks (Spark SQL, PySpark), Delta Lake, and Unity Catalog.
- Coding: Exceptional proficiency in Python, PySpark, Spark, AWS services, and SQL.
- Architectural: Strong data modeling and architectural design skills with a focus on practical implementation.
- Preferred: AWS/Databricks certifications, experience with streaming technologies, and other data tools.

Responsibilities:
- Design & build: Lead and personally execute the design, development, and deployment of complex data architectures and pipelines on AWS (S3, Glue, Lambda, Redshift, etc.) and Databricks (PySpark/Spark SQL, Delta Lake, Unity Catalog).
- Databricks expertise: Own the hands-on development, optimization, and performance tuning of Databricks jobs, clusters, and notebooks.
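A minimal sketch of the Databricks/Delta Lake work this role centers on: an upsert (MERGE) from a landing zone into a Delta table using PySpark. The paths and the order_id business key are hypothetical.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # pre-configured on a Databricks cluster

# Hypothetical landing-zone input and Delta target.
updates = spark.read.parquet("s3://landing-zone/orders/")
target = DeltaTable.forPath(spark, "s3://lakehouse/silver/orders")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")  # assumed business key
    .whenMatchedUpdateAll()      # update rows already in the target
    .whenNotMatchedInsertAll()   # insert brand-new rows
    .execute()
)
```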
Posted 6 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Overview: Viraaj HR Solutions is dedicated to connecting top talent with forward-thinking companies. Our mission is to provide exceptional talent acquisition services while fostering a culture of trust, integrity, and collaboration. We prioritize our clients' needs and work tirelessly to ensure the ideal candidate-job match. Join us in our commitment to excellence and become part of a dynamic team focused on driving success for individuals and organizations alike.

Role Responsibilities:
- Design, develop, and implement data pipelines using Azure Data Factory.
- Create and maintain data models for structured and unstructured data.
- Extract, transform, and load (ETL) data from various sources into data warehouses.
- Develop analytical solutions and dashboards using Azure Databricks.
- Perform data integration and migration tasks with Azure tools.
- Ensure optimal performance and scalability of data solutions.
- Collaborate with cross-functional teams to understand data requirements.
- Utilize SQL Server for database management and data queries.
- Implement data quality checks and ensure data integrity.
- Work on data governance and compliance initiatives.
- Monitor and troubleshoot data pipeline issues to ensure reliability.
- Document data processes and architecture for future reference.
- Stay current with industry trends and Azure advancements.
- Train and mentor junior data engineers and team members.
- Participate in design reviews and provide feedback for process improvements.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in a data engineering role.
- Strong expertise in Azure Data Factory and Azure Databricks.
- Proficient in SQL for data manipulation and querying.
- Experience with data warehousing concepts and practices.
- Familiarity with ETL tools and processes.
- Knowledge of Python or other programming languages for data processing.
- Ability to design scalable cloud architecture.
- Experience with data modeling and database design.
- Effective communication and collaboration skills.
- Strong analytical and problem-solving abilities.
- Familiarity with performance tuning and optimization techniques.
- Knowledge of data visualization tools is a plus.
- Experience with Agile methodologies.
- Ability to work independently and manage multiple tasks.
- Willingness to learn and adapt to new technologies.

Skills: etl, azure databricks, sql server, azure, data governance, azure data factory, python, data warehousing, data engineering, data integration, performance tuning, python scripting, sql, data modeling, data migration, data visualization, analytical solutions, pyspark, agile methodologies, data quality checks
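As a small illustration of the "data quality checks" responsibility above, here is a hedged PySpark sketch that fails a pipeline run when key columns contain nulls or duplicates; the table and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.read.table("bronze.customers")  # hypothetical source table

# Rule-based checks: null business keys and duplicated business keys.
null_ids = df.filter(F.col("customer_id").isNull()).count()
dupe_ids = df.groupBy("customer_id").count().filter(F.col("count") > 1).count()

if null_ids or dupe_ids:
    raise ValueError(
        f"Data quality check failed: {null_ids} null customer_ids, "
        f"{dupe_ids} duplicated customer_ids"
    )
```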
Posted 6 days ago
0 years
0 Lacs
Greater Hyderabad Area
On-site
Hyderabad, Telangana | Full Time

Neudesic is currently seeking Senior Data Scientists. This role requires the perfect mix of brilliant technologist and deep appreciation for how technology drives business value. You will have broad and deep technology knowledge and the ability to architect solutions by mapping client business problems to end-to-end technology solutions. A demonstrated ability to engage senior-level technology decision makers in data management, real-time analytics, predictive analytics, and data visualization is a must. To be successful, you must exhibit the strong leadership qualities necessary for building trust with clients and our technologists, with the ability to deliver ML/DL projects to successful completion. You will partner with solution architects to drive client success by providing practical guidance based on your years of experience in data management and visualization solutions. You will partner with a diverse sales unit to professionally represent Neudesic's experience and ability to drive business results. In addition, you will assist in creating sales assets that clearly communicate our value proposition to technical decision makers.

Experience: 6+ years

Primary Skills: Python, SQL, ML, NLP, data models, data insights
- Strong mathematical skills to help collect, measure, organize, and analyze data.
- Knowledge of programming languages like SQL and Python.
- Technical proficiency in database design and development, data models, and techniques for data mining and segmentation.
- Proficiency in statistics and statistical packages like Excel for analyzing data sets.
- Knowledge of data visualization software like Power BI is desirable.
- Knowledge of how to create and apply the most accurate algorithms to datasets in order to find solutions.
- Problem-solving skills.
- Adept at queries, writing reports, and making presentations.
- Team-working skills.
- Verbal and written communication skills.
- Proven working experience in data analysis.

Job Description 2:
- Expertise in a broad set of ML approaches and techniques, ranging from artificial neural networks to Bayesian non-parametric methods, model preparation and selection (feature engineering, PCA, model assessment), and modeling techniques (optimization, simulation).
- Proficiency in data analysis, modeling, and web services in Python.
- GPU programming experience.
- Natural language processing experience (ontology detection, named entity recognition and disambiguation) and predictive analytics experience a plus.
- Familiarity with the existing ML stack (Azure Machine Learning, scikit-learn, TensorFlow, Keras, and others).
- SQL/NoSQL experience.
- Experience with Apache Spark with Databricks or a similar platform for crunching massive amounts of data.
- Experience leveraging AI in the CX (Customer Experience) domain is a plus: Service (topic analysis, aspect-based sentiment analysis, NLP in a service context), Pre-Sales (segmentation and propensity models), and Customer Success (sentiment analysis, best-agent routing, churn prediction, customer health scores, recommender systems for next best action) using machine learning and data science for recurring-revenue business models.

Software Development Skills:
- Experience in SQL, development experience in at least one scripting language (Python, Perl, etc.), and one high-level programming language (Java).
- Experience with containerized applications on cloud (Azure Kubernetes Service), cloud databases (Azure SQL), and data storage (Azure Data Lake Storage, file storage).

More About Our Predictive Enterprise Service Line: The digital business uses data as a competitive differentiator. The explosion of big data, machine learning, and cloud computing power creates an opportunity to make a quantum leap forward in business understanding and customer engagement. The availability of massive amounts of information, massive computing power, and advancements in artificial intelligence allows the digital business to more accurately predict, plan for, and capture opportunity unlike ever before. The Predictive Enterprise service line is the evolution from using data strictly as a reporting mechanism for what's happened to leveraging the latest in advanced analytics to predict and prescribe future business action. Our services include:
- Data Management Solutions: We build architectures, policies, practices, and procedures that manage the full data lifecycle of an enterprise. We bring internal and exogenous datasets together to formulate new perspectives and drive data-thinking.
- Self-Service Data Solutions: We create classic self-service and modern data-blending solutions that enable end users to enrich pre-authored analytic reports by blending them with additional data sources.
- Real-Time Analytic Solutions: We build real-time analytics solutions on data-in-motion that eliminate the dependency on stale and static data sets, resulting in the ability to immediately query and analyze diverse data sets.
- Machine Learning Solutions: We build machine learning solutions that support the most complex decision-support systems.

Neudesic is an Equal Opportunity Employer. Neudesic provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by local laws.

Neudesic has been acquired by IBM and will be integrated into the IBM organization; Neudesic will be the hiring entity. By proceeding with this application, you understand that Neudesic will share your personal information with other IBM companies involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here: https://www.ibm.com/us-en/privacy?lnk=flg-priv-usen
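As a toy illustration of the NLP/sentiment work described above, here is a minimal scikit-learn text-classification pipeline; the sample data is fabricated for demonstration, and a real CX model would train on labelled service tickets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Fabricated labelled examples; 1 = positive sentiment, 0 = negative.
texts = [
    "great support, resolved my issue fast",
    "terrible wait times and no follow-up",
    "the agent was helpful and polite",
    "still broken after the call",
]
labels = [1, 0, 1, 0]

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # word and bigram features
    ("clf", LogisticRegression()),
])
model.fit(texts, labels)

print(model.predict(["support was quick and friendly"]))
```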
Posted 6 days ago
3.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Responsibilities:
- Develop and execute test scripts to validate data pipelines, transformations, and integrations.
- Formulate and maintain test strategies - including smoke, performance, functional, and regression testing - to ensure data processing and ETL jobs meet requirements.
- Collaborate with development teams to assess changes in data workflows and update test cases to preserve data integrity.
- Design and run tests for data validation, storage, and retrieval using Azure services like Data Lake, Synapse, and Data Factory, adhering to industry standards.
- Continuously enhance automated tests as new features are developed, ensuring timely delivery per defined quality standards.
- Participate in data reconciliation and verify Data Quality frameworks to maintain data accuracy, completeness, and consistency across the platform.
- Share knowledge and best practices by collaborating with business analysts and technology teams to document testing processes and findings.
- Communicate testing progress effectively with stakeholders, highlighting issues or blockers and ensuring alignment with business objectives.
- Maintain a comprehensive understanding of the Azure Data Lake platform's data landscape to ensure thorough testing coverage.

Skills & Experience:
- 3-6 years of QA experience with a strong focus on big data testing, particularly in Data Lake environments on Azure's cloud platform.
- Proficient in Azure Data Factory, Azure Synapse Analytics, and Databricks for big data processing and scaled data quality checks.
- Proficient in SQL, capable of writing and optimizing both simple and complex queries for data validation and testing purposes.
- Proficient in PySpark, with experience in data manipulation and transformation, and a demonstrated ability to write and execute test scripts for data processing and validation.
- Hands-on experience with functional and system integration testing in big data environments, ensuring seamless data flow and accuracy across multiple systems.
- Ability to design and execute test cases in a behaviour-driven development environment.
- Fluency in Agile methodologies, with active participation in Scrum ceremonies and a strong understanding of Agile principles.
- Familiarity with tools like Jira, including experience with X-Ray or Jira Zephyr for defect management and test case management.
- Proven experience working on high-traffic, large-scale software products, ensuring data quality, reliability, and performance under demanding conditions.
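A minimal sketch of what automated data-validation tests in this stack might look like: pytest plus a local PySpark session reconciling a source and a target. The paths are hypothetical, and the Delta reads assume a Spark session configured with delta-spark.

```python
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    # Local session for test runs; Delta reads assume delta-spark is configured.
    return SparkSession.builder.master("local[2]").appName("dq-tests").getOrCreate()

def test_row_counts_reconcile(spark):
    source = spark.read.parquet("/data/landing/orders")  # hypothetical path
    target = spark.read.format("delta").load("/data/curated/orders")
    assert source.count() == target.count()

def test_no_null_business_keys(spark):
    target = spark.read.format("delta").load("/data/curated/orders")
    assert target.filter("order_id IS NULL").count() == 0
```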
Posted 6 days ago
8.0 years
0 Lacs
India
Remote
Who We Are: At Twilio, we're shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work and strong culture of connection and global inclusion means that no matter your location, you're part of a vibrant team with diverse experiences making a global impact each day. As we continue to revolutionize how the world interacts, we're acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands.

See Yourself at Twilio: Join the team as our next Staff Backend Engineer on Twilio's Segment Engineering teams.

About The Job: As a Staff Backend Engineer on the Twilio Segment Engineering team, you'll help us build and scale systems that support the leading Customer Data Platform (CDP) in a rapidly evolving and competitive market. Our products process billions of data points per hour, enabling customers to orchestrate and activate their data efficiently and flexibly. Segment provides a best-in-class data infrastructure and orchestration platform that supports a wide range of customer use cases, from identity resolution to real-time audience segmentation. As an engineer on this team, you will be responsible for designing, developing, and optimizing backend services that power data pipelines, APIs, and event-driven architectures. If you thrive in fast-moving environments, enjoy working on scalable systems, and are passionate about building high-performance backend services, this role is for you.

Responsibilities - in this role, you'll:
- Design, develop, and maintain backend services that power Twilio Segment's high-scale data platform.
- Build scalable, high-performance APIs and data pipelines to support customer data orchestration.
- Improve the reliability, scalability, and efficiency of Segment's backend systems.
- Collaborate effectively with cross-functional teams, including product, design, and infrastructure, to deliver customer-focused solutions.
- Drive best practices in software engineering, including code reviews, testing, and deployment processes.
- Ensure high operational excellence by monitoring, troubleshooting, and maintaining always-on cloud services.
- Contribute to architectural discussions and technical roadmaps that align with Twilio's CXaaS vision and Segment's strategic initiatives.

Qualifications: Not all applicants will have skills that match a job description exactly. Twilio values diverse experiences in other industries, and we encourage everyone who meets the required qualifications to apply. While "desired" qualifications make for a strong candidate, we encourage applicants with alternative experiences to also apply. If your career is just starting or hasn't followed a traditional path, don't let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!

Required:
- 8+ years of experience writing production-grade backend code in a modern programming language (e.g., Golang, Python, Java, Scala, or similar).
- Strong fundamentals and experience in building fault-tolerant distributed systems, event-driven architectures, and database design.
- Experience working with AWS cloud-based infrastructure.
- Well-versed in designing and building high-scale, low-latency APIs.
- Solid grasp of Linux systems and networking concepts.
- Strong debugging and troubleshooting skills for complex distributed applications.
- Experience shipping services (products) following the CI/CD development paradigm.
- Effective communication skills and the ability to collaborate in a fast-paced team environment.
- Comfortable with ambiguity and problem-solving in a rapidly growing company.

Desired:
- Experience working with event streaming technologies (Kafka, Pulsar, or similar).
- Experience with database technologies like PostgreSQL, DynamoDB, or Databricks SQL.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Background in building multi-tenant SaaS platforms at scale.
- Experience working with observability tools such as Prometheus, Grafana, or Datadog.
- Experience working in a geographically distributed team.

Location: This role will be remote and based in India (Karnataka, Tamil Nadu, Telangana, Maharashtra, Delhi).

Travel: We prioritize connection and opportunities to build relationships with our customers and each other. For this role, you may be required to travel occasionally to participate in project or team in-person meetings.

What We Offer: Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.

Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That's why we seek out colleagues who embody our values - something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you're ready to unleash your full potential, do your best work, and be the best version of yourself, apply now! If this role isn't what you're looking for, please consider other open positions.

Twilio is proud to be an equal opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.
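As a small, hedged illustration of an event-driven backend consumer (shown in Python for consistency with the rest of this page, though the posting also lists Golang, Java, and Scala), here is a minimal Kafka consumer using the kafka-python client. The topic and field names are hypothetical, and production systems at this scale would add batching, retries, and dead-letter queues.

```python
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "tracking-events",                     # hypothetical topic name
    bootstrap_servers=["localhost:9092"],
    group_id="profile-builder",            # hypothetical consumer group
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    enable_auto_commit=False,              # commit manually after processing
)

for message in consumer:
    event = message.value
    # e.g., resolve identity and update a user-profile store here.
    print(event.get("userId"), event.get("event"))
    consumer.commit()  # commit offsets only after successful processing
```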
Posted 6 days ago
7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description: The candidate must possess knowledge relevant to the functional area and act as a subject matter expert, providing advice in the area of expertise while focusing on continuous improvement for maximum efficiency. It is vital to focus on a high standard of delivery excellence, provide top-notch service quality, and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage the career aspirations of direct reports. Communication skills are key here: to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives.

Senior Process Manager - Roles and Responsibilities: We are seeking a talented and motivated Data Engineer to join our dynamic team. The ideal candidate will have a deep understanding of data integration processes and experience in developing and managing data pipelines using Python, SQL, and PySpark within Databricks. You will be responsible for designing robust backend solutions, implementing CI/CD processes, and ensuring data quality and consistency.
- Data pipeline development: Use Databricks features to explore raw datasets and understand their structure. Create and optimize Spark-based workflows. Create end-to-end data processing pipelines, including ingesting raw data, transforming it, and running analyses on the processed data. Create and maintain data pipelines using Python and SQL.
- Solution design and architecture: Design and architect backend solutions for data integration, ensuring they are robust, scalable, and aligned with business requirements. Implement data processing pipelines using various technologies, including cloud platforms, big data tools, and streaming frameworks.
- Automation and scheduling: Automate data integration processes and schedule jobs on servers to ensure seamless data flow.
- Data quality and monitoring: Develop and implement data quality checks and monitoring systems to ensure data accuracy and consistency.
- CI/CD implementation: Use Jenkins and Bitbucket to create and maintain metadata and job files. Implement continuous integration and continuous deployment (CI/CD) processes in both development and production environments to deploy data pipelines efficiently.
- Collaboration and documentation: Work effectively with cross-functional teams, including software engineers, data scientists, and DevOps, to ensure successful project delivery. Document data pipelines and architecture to ensure knowledge transfer and maintainability. Participate in stakeholder interviews, workshops, and design reviews to define data models, pipelines, and workflows.

Technical and Functional Skills:
- Education and experience: Bachelor's degree with 7+ years of experience, including at least 3+ years of hands-on experience in SQL and Python.
- Technical proficiency: Proficiency in writing and optimizing SQL queries in MySQL and SQL Server. Expertise in Python for writing reusable components and enhancing existing ETL scripts. Solid understanding of ETL concepts and data pipeline architecture, including CDC, incremental loads, and slowly changing dimensions (SCDs). Hands-on experience with PySpark; knowledge of and experience with Databricks is a bonus. Familiarity with data warehousing solutions and ETL processes. Understanding of data architecture and backend solution design.
- Cloud and CI/CD experience: Experience with cloud platforms such as AWS, Azure, or Google Cloud. Familiarity with Jenkins and Bitbucket for CI/CD processes.
- Additional skills: Ability to work independently and manage multiple projects simultaneously.

About Us: At eClerx, we serve some of the largest global companies - 50 of the Fortune 500 clients. Our clients call upon us to solve their most complex problems and deliver transformative insights. Across roles and levels, you get the opportunity to build expertise, challenge the status quo, think bolder, and help our clients seize value.

About The Team: eClerx is a global leader in productized services, bringing together people, technology, and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics, and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience.

eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.
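To make the "incremental loads" concept above concrete, here is a hedged PySpark sketch of a watermark-based incremental load, one common CDC-lite pattern. The table and column names are hypothetical, and first-run/null-watermark handling is omitted for brevity.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Watermark: the latest timestamp already loaded into the curated table.
last_loaded = spark.sql(
    "SELECT MAX(updated_at) AS wm FROM curated.orders"
).collect()[0]["wm"]

# Pull only rows newer than the watermark from staging.
increment = spark.read.table("staging.orders").filter(
    f"updated_at > '{last_loaded}'"
)

increment.write.mode("append").saveAsTable("curated.orders")
```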
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Overview: Viraaj HR Solutions is dedicated to connecting top talent with forward-thinking companies. Our mission is to provide exceptional talent acquisition services while fostering a culture of trust, integrity, and collaboration. We prioritize our clients' needs and work tirelessly to ensure the ideal candidate-job match. Join us in our commitment to excellence and become part of a dynamic team focused on driving success for individuals and organizations alike.

Role Responsibilities:
- Design, develop, and implement data pipelines using Azure Data Factory.
- Create and maintain data models for structured and unstructured data.
- Extract, transform, and load (ETL) data from various sources into data warehouses.
- Develop analytical solutions and dashboards using Azure Databricks.
- Perform data integration and migration tasks with Azure tools.
- Ensure optimal performance and scalability of data solutions.
- Collaborate with cross-functional teams to understand data requirements.
- Utilize SQL Server for database management and data queries.
- Implement data quality checks and ensure data integrity.
- Work on data governance and compliance initiatives.
- Monitor and troubleshoot data pipeline issues to ensure reliability.
- Document data processes and architecture for future reference.
- Stay current with industry trends and Azure advancements.
- Train and mentor junior data engineers and team members.
- Participate in design reviews and provide feedback for process improvements.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in a data engineering role.
- Strong expertise in Azure Data Factory and Azure Databricks.
- Proficient in SQL for data manipulation and querying.
- Experience with data warehousing concepts and practices.
- Familiarity with ETL tools and processes.
- Knowledge of Python or other programming languages for data processing.
- Ability to design scalable cloud architecture.
- Experience with data modeling and database design.
- Effective communication and collaboration skills.
- Strong analytical and problem-solving abilities.
- Familiarity with performance tuning and optimization techniques.
- Knowledge of data visualization tools is a plus.
- Experience with Agile methodologies.
- Ability to work independently and manage multiple tasks.
- Willingness to learn and adapt to new technologies.

Skills: etl, azure databricks, sql server, azure, data governance, azure data factory, python, data warehousing, data engineering, data integration, performance tuning, python scripting, sql, data modeling, data migration, data visualization, analytical solutions, pyspark, agile methodologies, data quality checks
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Gurgaon/Bangalore, India

AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable - enabling AXA XL's executive leadership team to maximize benefits and facilitate sustained competitive advantage. Our Chief Data Office, also known as our Innovation, Data Intelligence & Analytics (IDA) team, is focused on driving innovation by optimizing how we leverage data to drive strategy and create a new business model - disrupting the insurance market.

As we develop an enterprise-wide data and digital strategy that moves us toward a greater focus on the use of data and data-driven insights, we are seeking a Data Engineer. The role will support the team's efforts toward creating, enhancing, and stabilizing the enterprise data lake through the development of data pipelines. It requires a team player who can work well with team members from other disciplines to deliver data in an efficient and strategic manner.

What You'll Be Doing - what will your essential responsibilities include?
- Act as a data engineering expert and partner to Global Technology and data consumers in controlling the complexity and cost of the data platform, while enabling performance, governance, and maintainability of the estate.
- Understand current and future data consumption patterns and architecture at a granular level, and partner with architects to ensure optimal design of the data layers.
- Apply best practices in data architecture: for example, the balance between materialization and virtualization, the optimal level of de-normalization, caching and partitioning strategies, choice of storage and querying technology, and performance tuning.
- Lead and execute hands-on research into new technologies, formulating frameworks for assessing new technology against business benefit and the implications for data consumers.
- Act as a best-practice expert and blueprint creator for ways of working - such as testing, logging, CI/CD, observability, and release - enabling rapid growth in data inventory and utilization of the Data Science Platform.
- Design prototypes and work in a fast-paced, iterative solution delivery model.
- Design, develop, and maintain ETL pipelines using PySpark in Azure Databricks with delta tables, using Harness for the deployment pipeline (see the sketch after this listing).
- Monitor the performance of ETL jobs, resolve any issues that arise, and improve performance metrics as needed.
- Diagnose system performance issues related to data processing and implement solutions to address them.
- Collaborate with other teams to ensure successful integration of data pipelines into the larger system architecture.
- Maintain integrity and quality across all pipelines and environments.
- Understand and follow secure coding practices to make sure code is not vulnerable.
You will report to the Technical Lead.

What You Will Bring - required skills and abilities:
- Effective communication skills.
- Bachelor's degree in computer science, mathematics, statistics, finance, a related technical field, or equivalent work experience.
- Relevant years of extensive work experience in various data engineering and modeling techniques (relational, data warehouse, semi-structured, etc.), application development, and advanced data querying skills.
- Relevant years of programming experience using Databricks.
- Relevant years of experience using the Microsoft Azure suite of products (ADF, Synapse, and ADLS).
- Solid knowledge of network and firewall concepts.
- Solid experience writing, optimizing, and analyzing SQL.
- Relevant years of experience with Python.
- Ability to break complex data requirements into achievable targets and architect solutions.
- Robust familiarity with Software Development Life Cycle (SDLC) processes and workflows, especially Agile.
- Experience using Harness.
- Technical lead responsible for both individual and team deliveries.

Desired skills and abilities:
- Experience in big data migration projects.
- Experience in performance tuning at both the database and big data platform levels.
- Ability to interpret complex data requirements and architect solutions.
- Distinctive problem-solving and analytical skills combined with robust business acumen.
- Excellent grasp of the basics of parquet and delta files.
- Effective knowledge of the Azure cloud computing platform.
- Familiarity with reporting software - Power BI is a plus.
- Familiarity with DBT is a plus.
- Passion for data and experience working within a data-driven organization. You care about what you do, and what we do.

Who We Are: AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals, and even some inspirational individuals, we don't just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business: property, casualty, professional, financial lines, and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com.

What We Offer - Inclusion: AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture enables business growth and is critical to our success. That's why we have made a strategic commitment to attract, develop, advance, and retain the most inclusive workforce possible, and to create a culture where everyone can bring their full selves to work and reach their highest potential. It's about helping one another - and our business - to move forward and succeed. Highlights include:
- Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability, and inclusion, with 20 chapters around the globe.
- Robust support for flexible working arrangements.
- Enhanced family-friendly leave benefits.
- Named to the Diversity Best Practices Index.
- Signatory to the UK Women in Finance Charter.
Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.

Total Rewards: AXA XL's Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle, and financial security. It provides competitive compensation and personalized, inclusive benefits that evolve as you do. We're committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability: At AXA XL, sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 sustainability strategy, called "Roots of Resilience", focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations. Our pillars:
- Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We're committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
- Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net-zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.
- Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We're training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.
- AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL's "Hearts in Action" programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day - the Global Day of Giving.
For more information, please see axaxl.com/sustainability.
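As an illustrative sketch of the "ETL pipelines using PySpark in Azure Databricks with delta tables" responsibility referenced above: a minimal write of a partitioned Delta table followed by Databricks-specific table maintenance (OPTIMIZE/ZORDER). The ADLS path, table, and column names are assumptions.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS source; write a partitioned Delta table.
df = spark.read.parquet("abfss://raw@account.dfs.core.windows.net/claims/")
(
    df.write.format("delta")
    .mode("overwrite")
    .partitionBy("policy_year")
    .saveAsTable("silver.claims")
)

# Databricks-specific maintenance: compact small files and co-locate a hot key.
spark.sql("OPTIMIZE silver.claims ZORDER BY (policy_id)")
```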
Posted 1 week ago
3.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Data Analyst

We are seeking a highly skilled Technical Data Analyst to join our team and play a key role in building a single source of truth for our high-volume, direct-to-consumer accounting and financial data warehouse. The ideal candidate will have a strong background in data analysis, SQL, and data transformation, with experience in financial data warehousing and reporting. This role will involve working closely with finance and accounting teams to gather requirements, build dashboards, and transform data to support month-end accounting, tax reporting, and financial forecasting. The financial data warehouse is currently built in Snowflake and will be migrated to Databricks. The candidate will be responsible for transitioning reporting and transformation processes to Databricks while ensuring data accuracy and consistency.

Key Responsibilities:

Data Analysis & Reporting:
- Build and maintain month-end accounting and tax dashboards using SQL and Snowsight in Snowflake.
- Transition reporting processes to Databricks, creating dashboards and reports to support finance and accounting teams.
- Gather requirements from finance and accounting stakeholders to design and deliver actionable insights.

Data Transformation & Aggregation:
- Develop and implement data transformation pipelines in Databricks to aggregate financial data and create balance-sheet look-forward views.
- Ensure data accuracy and consistency during the migration from Snowflake to Databricks.
- Collaborate with the data engineering team to optimize data ingestion and transformation processes.

Data Integration & ERP Collaboration:
- Support the integration of financial data from the data warehouse into NetSuite ERP by ensuring data is properly transformed and validated.
- Work with cross-functional teams to ensure seamless data flow between systems.

Data Ingestion & Tools:
- Understand and work with Fivetran for data ingestion (familiarity is required; deep expertise is not).
- Troubleshoot and resolve data-related issues in collaboration with the data engineering team.

Additional Qualifications:
- 3+ years of experience as a Data Analyst or in a similar role, preferably in a financial or accounting context.
- Strong proficiency in SQL and experience with Snowflake and Databricks.
- Experience building dashboards and reports for financial data (e.g., month-end close, tax reporting, balance sheets).
- Familiarity with Fivetran or similar data ingestion tools.
- Understanding of financial data concepts (e.g., general ledger, journals, balance sheets, income statements).
- Experience with data transformation and aggregation in a cloud-based environment.
- Strong communication skills to collaborate with finance and accounting teams.
- Nice-to-have: experience with NetSuite ERP or similar financial systems.
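A small sketch of the kind of month-end rollup this role describes, expressed as Spark SQL run from Databricks. The general-ledger schema (a finance.journal_lines table with a debit/credit side column) is hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Trial-balance style rollup for one accounting period.
month_end = spark.sql("""
    SELECT account_code,
           SUM(CASE WHEN side = 'D' THEN amount ELSE -amount END) AS net_balance
    FROM finance.journal_lines
    WHERE posting_date BETWEEN '2024-01-01' AND '2024-01-31'
    GROUP BY account_code
    ORDER BY account_code
""")
month_end.show()
```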
Posted 1 week ago
65.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
What We Offer: At Magna, you can expect an engaging and dynamic environment where you can help to develop industry-leading automotive technologies. We invest in our employees, providing them with the support and resources they need to succeed. As a member of our global team, you can expect exciting, varied responsibilities as well as a wide range of development prospects, because we believe that your career path should be as unique as you are.

Group Summary: Magna is more than one of the world's largest suppliers in the automotive space. We are a mobility technology company built to innovate, with a global, entrepreneurial-minded team. With 65+ years of expertise, our ecosystem of interconnected products, combined with our complete vehicle expertise, uniquely positions us to advance mobility in an expanded transportation landscape.

Job Responsibilities: The Senior Power BI Developer will be responsible for interpreting business needs and transforming them into powerful Power BI reports and other data-insight products and apps. This includes the design, development, maintenance, integration, and reporting of business systems through cubes, ad-hoc reports, and dashboards, leveraging trending technologies such as Microsoft Fabric or Databricks. The selected candidate will work closely with international team members from Europe, North America, and Asia.

Major Responsibilities:
- Collaborate with business analysts and stakeholders to understand data visualization requirements and translate them into effective BI solutions.
- Design and develop visually appealing and user-friendly reports, dashboards, and interactive data visualizations to present complex insights in a comprehensible manner.
- Leverage proficiency in DAX to create calculated measures, columns, and tables that enhance data analysis capabilities within Power BI models.
- Develop and optimize ETL processes using Power Query, SQL, Databricks, and MS Fabric to transform and integrate data from diverse sources, ensuring accuracy and consistency.
- Leverage Databricks' or MS Fabric's capabilities, including Apache Spark, Delta Lake, and AutoML/AzureML, to enhance data processing and analytics.
- Implement best practices for data modeling, performance optimization, and data governance within Power BI projects.
- Work closely with database administrators and data engineers to ensure seamless data flow from source systems to Power BI and maintain data integrity.
- Identify and address performance bottlenecks in Power BI reports and dashboards; optimize queries and data models for improved speed and efficiency.
- Implement security measures to ensure the confidentiality and integrity of data in Power BI, and ensure compliance with data governance and privacy policies.
- Create and maintain documentation for all owned Power BI reports.
- Stay up to date with Power BI advancements and industry trends, constantly seek better, more optimized solutions and technologies, and apply newly gained knowledge to Magna's Power BI processes.
- Provide training sessions and technical support to end users to foster self-service analytics and maximize Power BI utilization.
- Provide support to junior team members.
- Collaborate with cross-functional teams to identify opportunities for data-driven insights and contribute to strategic decision-making processes.

Knowledge and Education: Completion of a university degree.

Work Experience: More than 3 years of work-related experience.

Skills and Competencies Required to Perform the Job:
- More than 3 years of experience in the development of Business Intelligence solutions based on Microsoft Tabular models, including Power BI visualization and complex DAX expressions.
- Experience in designing and implementing data models on Tabular, SQL, or Delta Lake based storage solutions.
- Solid understanding of ETL processes and Data Warehouse and Lakehouse concepts.
- Strong SQL coding skills.
- Advanced skills in the Microsoft BI stack (including Analysis Services, Paginated Reports, Power Pivot, and Azure SQL).
- Experience with Synapse, Data Flow, AutoML/AzureML, Tabular Editor, or DAX Studio is a plus.
- Knowledge of programming languages (Python, C#, or similar) is a big plus.
- Self-motivated and self-managed, with a high degree of analytical skill (quick comprehension, abstract thinking, recognizing relationships).
- Ability to be a strong team member and communicate effectively.
- Ability to prioritize and multi-task, and to reasonably estimate work effort for tasks.
- Excellent English language skills (written and verbal).

Working Conditions and Environment: Work in second or third shift (starting at 4:30 PM or later, India time). Regular travel: 10-25% of the time.

Any Additional Information: Excellent spoken and written English is a must. The ability to explain complex issues to a non-technical audience is mandatory. Work environment: regular travel 10-20% of the time. For dedicated and motivated employees, we offer an interesting and diversified job within a dynamic global team, together with individual and functional development in the professional environment of a globally acting business. Fair treatment and a sense of responsibility towards employees are principles of the Magna culture. We strive to offer an inspiring and motivating work environment.

Additional Information: We offer attractive benefits (e.g., a discretionary performance bonus) and a salary in line with market conditions, depending on your skills and experience.

Awareness, Unity, Empowerment: At Magna, we believe that a diverse workforce is critical to our success. That's why we are proud to be an equal opportunity employer. We hire on the basis of experience and qualifications, and in consideration of job requirements, regardless of, in particular, color, ancestry, religion, gender, origin, sexual orientation, age, citizenship, marital status, disability, or gender identity. Magna takes the privacy of your personal information seriously. We discourage you from sending applications via email to comply with GDPR requirements and your local data privacy law.

Worker Type: Regular / Permanent
Group: Magna Corporate
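As a hedged illustration of the ETL side of this role - preparing an aggregate in Databricks that a Power BI model could consume - here is a minimal PySpark sketch. The fact/dimension table and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

fact = spark.read.table("gold.fact_sales")   # hypothetical fact table
dim = spark.read.table("gold.dim_plant")     # hypothetical dimension table

summary = (
    fact.join(dim, "plant_id")
    .groupBy("region", F.date_trunc("month", "order_date").alias("month"))
    .agg(
        F.sum("revenue").alias("revenue"),
        F.countDistinct("order_id").alias("orders"),
    )
)

# Persist as a Delta table that a Power BI dataset can read.
summary.write.format("delta").mode("overwrite").saveAsTable("gold.sales_monthly")
```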
Posted 1 week ago
Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.
The average salary range for Databricks professionals in India varies by experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-25 lakhs per annum

A typical career path in the Databricks field may progress through:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect

In addition to Databricks expertise, skills that are often expected or helpful include:
- Apache Spark
- Python/Scala programming
- Data modeling
- SQL
- Data visualization tools
As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!